Discussion:
[bitcoin-dev] Torrent-style new-block propagation on Merkle trees
Jonathan Toomim (Toomim Bros) via bitcoin-dev
2015-09-23 23:12:14 UTC
Permalink
As I understand it, the current block propagation algorithm is this:

1. A node mines a block.
2. It notifies its peers that it has a new block with an inv. Typical nodes have 8 peers.
3. The peers respond that they have not seen it, and request the block with getdata [hash].
4. The node sends out the block in parallel to all 8 peers simultaneously. If the node's upstream bandwidth is limiting, then all peers will receive most of the block before any peer receives all of the block. The block is sent out as the small header followed by a list of transactions.
5. Once a peer completes the download, it verifies the block, then enters step 2.

(If I'm missing anything, please let me know.)

The main problem with this algorithm is that it requires a peer to have the full block before it does any uploading to other peers in the p2p mesh. This slows block propagation down to O( p • log_p(n) ) sequential block transmissions, where n is the number of nodes in the mesh, and p is the number of peers each node transmits to simultaneously.
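
To make that scaling concrete, here's a back-of-the-envelope sketch (C++, with purely illustrative numbers of my own):

#include <cmath>
#include <cstdio>

int main() {
    double n = 6000; // reachable full nodes in the mesh (illustrative)
    double p = 8;    // peers each node uploads to simultaneously
    // The block crosses about log_p(n) hops, and at each hop the sender
    // uploads the full block p times before any peer can relay it onward,
    // so the total is ~p * log_p(n) full-block transmission times.
    double hops = std::log(n) / std::log(p);
    double transmissions = p * hops;
    std::printf("%.1f hops, ~%.0f block-transmission times\n",
                hops, transmissions);
    // e.g. an 8 MB block over a 20 Mbps uplink is ~3.2 s per transmission,
    // so ~33 transmissions is on the order of 100 seconds end to end.
    return 0;
}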

It's like the Napster era of file-sharing. We can do much better than this; BitTorrent can be an example for us. BitTorrent splits the file to be shared into a bunch of chunks and hashes each chunk. Downloaders (leeches) grab the list of hashes, then start requesting the chunks from their peers out of order. As each leech completes a chunk and verifies it against its hash, it begins to share that chunk with other leeches. Total propagation time for large files can be approximately equal to the transmission time for an FTP upload. Sometimes it's significantly slower, but often it's actually faster due to less bottlenecking on a single connection and better resistance to packet/connection loss. (This could be relevant for crossing the Chinese border, since the Great Firewall tends to produce random packet loss, especially on encrypted connections.)

Bitcoin already uses a data structure for transactions with hashes built in: the Merkle tree. We can use that in lieu of BitTorrent's file chunks.
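
For concreteness, here's a minimal sketch of how the tree's rows are built from the txids. A generic combine callback stands in for Bitcoin's double-SHA256; the odd-row duplication follows Bitcoin's convention:

#include <array>
#include <cstdint>
#include <functional>
#include <vector>

using Hash = std::array<uint8_t, 32>;
// In Bitcoin this is double-SHA256 of the two child hashes concatenated.
using Combine = std::function<Hash(const Hash&, const Hash&)>;

// Build the tree bottom-up from the leaves (txids). Bitcoin duplicates the
// last node of an odd-width row. Returns every row, leaves first, root last.
std::vector<std::vector<Hash>> merkle_rows(std::vector<Hash> row,
                                           const Combine& combine)
{
    std::vector<std::vector<Hash>> rows{row};
    while (row.size() > 1) {
        if (row.size() % 2) row.push_back(row.back()); // duplicate odd tail
        std::vector<Hash> parent(row.size() / 2);
        for (size_t i = 0; i < parent.size(); ++i)
            parent[i] = combine(row[2 * i], row[2 * i + 1]);
        row = std::move(parent);
        rows.push_back(row);
    }
    return rows;
}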

A BitTorrent-inspired algorithm might be something like this:

0. (Optional steps to build a Merkle cache; described later)
1. A seed node mines a block.
2. It notifies its peers that it has a new block with an extended version of inv.
3. The leech peers request the block header.
4. The seed sends the block header. The leech code path splits into two.
5(a). The leeches verify the block header, including the PoW. If the header is valid,
6(a). They notify their peers that they have a header for an unverified new block with an extended version of inv, looping back to step 2 above. If it is invalid, they abort thread (b).
5(b). The leeches request the Nth row (from the root) of the transaction Merkle tree, where N might typically be between 2 and 10. That corresponds to about 1/4th to 1/1024th of the transactions. The leeches also request a bitfield indicating which of the Merkle nodes the seed has leaves for. The seed supplies this (0xFFFF...).
6(b). The leeches calculate all parent node hashes in the Merkle tree, and verify that the root hash is as described in the header.
7. The leeches search their Merkle hash cache to see if they have the leaves (transaction hashes and/or transactions) for those nodes already.
8. The leeches send a bitfield request to the node indicating which Merkle nodes they want the leaves for.
9. The seed responds by sending leaves (either txn hashes or full transactions, depending on benchmark results) to the leeches in whatever order it decides is optimal for the network.
10. The leeches verify that the leaves hash into the ancestor node hashes that they already have.
11. The leeches begin sharing leaves with each other.
12. If the leaves are txn hashes, they check their cache for the actual transactions. If any are missing, they request those txns with a getdata, or all of the missing txns (as a list) with a few batch getdatas.
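
As a sketch of steps 5(b)/6(b) and 10 (reusing the Hash/Combine types from the sketch above, and glossing over the odd-width row complications of Bitcoin's actual tree): a leech that receives a claimed row of hashes can check it against the header's root, after which each hash in the row independently authenticates the leaves beneath it:

// Verify that a claimed row of interior hashes reduces to the Merkle root
// committed to in the (already PoW-checked) block header.
bool verify_row(std::vector<Hash> row, const Hash& root, const Combine& combine)
{
    while (row.size() > 1) {
        if (row.size() % 2) row.push_back(row.back());
        std::vector<Hash> parent(row.size() / 2);
        for (size_t i = 0; i < parent.size(); ++i)
            parent[i] = combine(row[2 * i], row[2 * i + 1]);
        row = std::move(parent);
    }
    return row[0] == root;
}

// Once the row checks out, a batch of leaves received for node i (step 10)
// is verified by hashing it up to row[i]; no trust in the sending peer is
// required, so leaves can be fetched from anyone and shared immediately.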

The main feature of this algorithm is that a leech will begin to upload chunks of data as soon as it gets them and confirms both PoW and hash/data integrity, instead of waiting for a full copy with full verification.

This algorithm is more complicated than the existing one, and it won't always perform better. Because more round-trip messages are required to negotiate the Merkle tree transfers, it will perform worse in situations where the bandwidth-delay product is large relative to the blocksize, i.e. where transmission time is small compared to round-trip latency. Specifically, the minimum per-hop latency will likely be higher. This might be mitigated by reducing the number of round-trip messages needed to set up the blocktorrent, using larger and more complex inv-like and getdata-like messages that preemptively send some data (e.g. block headers). This would trade off latency for bandwidth overhead from larger duplicated inv messages. Depending on implementation quality, the latency for the smallest blocks might be the same between algorithms, or it might be 300% higher for the torrent algorithm. For small blocks (perhaps < 100 kB), the blocktorrent algorithm will likely be slightly slower. For large blocks (e.g. 8 MB over 20 Mbps), I expect the blocktorrent algorithm will likely be around an order of magnitude faster in the worst-case (adversarial) scenarios, in which none of the block's transactions are in the caches.

One of the big benefits of the blocktorrent algorithm is that it provides several obvious and straightforward points for bandwidth savings and optimization by caching transactions and reconstructing the transaction order. A cooperating miner can pre-announce Merkle subtrees containing some of the transactions they plan to include in the final block. Other miners who see those subtrees can compare the transactions in those subtrees to the transaction sets they are mining with, and can rearrange their block prototypes to use the same subtrees as much as possible. In the case of public pools supporting the getblocktemplate protocol, it might be possible to build Merkle subtree caches without the pool's help by having one or more nodes simply scrape their getblocktemplate results. Even if some transactions are inserted or deleted, it may be possible to guess a lot of the tree based on the previous ordering.
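
The Merkle cache in step 7 might be as simple as the following (a hypothetical structure of my own, reusing the Hash type above; nothing like this exists in bitcoind today):

#include <map>

// Hypothetical cache: interior Merkle node hash -> the ordered txids
// beneath it, populated from recent blocks, pre-announced subtrees, and
// scraped getblocktemplate results.
std::map<Hash, std::vector<Hash>> subtree_cache;

// Steps 7/8: given a verified row, note what we can already expand locally
// and build the bitfield of nodes whose leaves we still need from the seed.
std::vector<bool> leaves_needed(const std::vector<Hash>& row)
{
    std::vector<bool> needed(row.size());
    for (size_t i = 0; i < row.size(); ++i)
        needed[i] = (subtree_cache.find(row[i]) == subtree_cache.end());
    return needed;
}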

Once a block header and the first few rows of the Merkle tree have been published, they will propagate through the whole network, at which time full nodes might even be able to guess parts of the tree by searching through their txn and Merkle node/subtree caches. That might be fun to think about, but probably not effective due to O(n^2) or worse scaling with transaction count. Might be able to make it work if the whole network cooperates on it, but there are probably more important things to do.

There are also a few other features of BitTorrent that would be useful here, like prioritizing uploads to different peers based on their upload capacity, and banning peers that submit data that doesn't hash to the right value. (It might be good if we could get Bram Cohen to help with the implementation.)

Another option is just to treat the block as a file and literally BitTorrent it, but I think there should be enough benefit in integrating with the existing Bitcoin p2p connections, and with bitcoind's transaction caches and Merkle tree caches, to make a native implementation worthwhile. Also, BitTorrent itself was designed to optimize more for bandwidth than for latency, so we will have slightly different goals and tradeoffs during implementation.

One of the concerns that I initially had about this idea was that it involves nodes forwarding unverified block data to other nodes. At first, I thought this might be useful for a rogue miner or node who wanted to quickly waste the whole network's bandwidth. However, in order to perform this attack, the rogue needs to construct a valid header with a valid PoW, but use a set of transactions that renders the block as a whole invalid in a manner that is difficult to detect without full verification. It will be difficult to design such an attack so that the damage in bandwidth used is worth more than the roughly 240 exahashes of expected work (difficulty × 2^32 hashes, at current difficulty) and the 25.1 BTC opportunity cost associated with creating a valid header.

As I understand it, the O(1) IBLT approach requires that blocks follow strict rules (yet to be fully defined) about the transaction ordering. If these are not followed, then it turns into sending a list of txn hashes, and separately ensuring that all of the txns in the new block are already in the recipient's mempool. When mempools are very dissimilar, the IBLT approach degrades heavily and becomes worse than simply sending the raw block. This could occur if a node just joined the network, during chain reorgs, or due to malicious selfish miners. Also, if the mempool has a lot more transactions than are included in the block, the false positive rate for detecting whether a transaction already exists in another node's mempool might get high for otherwise reasonable bucket counts/sizes.
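
For readers unfamiliar with IBLTs, a heavily simplified sketch of the core idea (my own illustration; real IBLTs use independent hash functions over cell partitions, carry value and checksum fields, and are decoded by iterative peeling, all omitted here):

#include <cstdint>
#include <vector>

struct Cell {
    int32_t count = 0;    // net number of items XORed into this cell
    uint64_t key_sum = 0; // XOR of the (truncated) txids in this cell
};

struct Iblt {
    std::vector<Cell> cells;
    int k; // each txid is mapped into k cells
    Iblt(size_t m, int k_) : cells(m), k(k_) {}

    void add(uint64_t txid, int sign) {
        for (int i = 0; i < k; ++i) {
            // crude stand-in for k independent hash functions
            size_t idx = (txid * 0x9e3779b97f4a7c15ULL
                          + i * 0xff51afd7ed558ccdULL) % cells.size();
            cells[idx].count += sign;
            cells[idx].key_sum ^= txid;
        }
    }
};

// The sender inserts (sign = +1) the block's txids; the receiver deletes
// (sign = -1) its mempool's txids. Matching entries cancel, leaving only
// the symmetric difference to be peeled out, which is why the scheme
// falls apart when the two sets are very dissimilar.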

With the blocktorrent approach, the focus is on transmitting the list of hashes in a manner that propagates as quickly as possible while still allowing methods for reducing the total bandwidth needed. The blocktorrent algorithm does not really address how the actual transaction data will be obtained because, once the leech has the list of txn hashes, the standard Bitcoin p2p protocol can supply them in a parallelized and decentralized manner.



Thoughts?

-jtoomim
Angel Leon via bitcoin-dev
2015-09-24 00:31:27 UTC
Permalink
Has anybody ever submitted a patch using libtorrent's library for this purpose?
Would it make sense to create a torrent per confirmed valid block, once it's been truly added to the blockchain?

If we used libtorrent, I imagine something like the following to announce a new block on libtorrent's DHT:

(pseudocode, made concrete against libtorrent's C++ API)

#include <libtorrent/session.hpp>
#include <libtorrent/hasher.hpp>
#include <string>

// This is how you could announce yourself as a peer on the DHT holding a
// block at a certain height. The announcement includes a TCP port on which
// peers can then request more information about that block.
std::string key = "blockchain::bitcoin::block::"
                  + std::to_string(last_block_height);
lt::sha1_hash target = lt::hasher(key.data(), int(key.size())).final();
session.dht_announce(target, rpc_port);

// Another peer looking for peers that might be seeding the block at
// height N would then do:
std::string lookup = "blockchain::bitcoin::block::" + std::to_string(N_height);
session.dht_get_peers(lt::hasher(lookup.data(), int(lookup.size())).final());

// The responses arrive asynchronously from the XOR-nearest peers on the
// DHT; the search is O(log n). You could then request from several of
// those peers the infohash for the .torrent that's been created to seed
// the block at height N.

Then you'd start the block download by building a magnet link out of the infohash received for the block. The download would be done directly to memory, and the resulting byte array would then be deserialized into a block as is done now. You'd then announce yourself both on sha1(peer_announcement_key) and on the infohash of the block you just downloaded, once you start seeding it.
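
A minimal sketch of that download step against libtorrent's API (assuming infohash_hex holds the hex infohash received above; true in-memory storage would need a custom storage backend, omitted here):

#include <libtorrent/add_torrent_params.hpp>
#include <libtorrent/magnet_uri.hpp>

// Build a magnet link from the received infohash and start the download.
lt::add_torrent_params params =
    lt::parse_magnet_uri("magnet:?xt=urn:btih:" + infohash_hex);
params.save_path = "."; // a custom in-memory storage would replace this
lt::torrent_handle handle = session.add_torrent(params);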

Perhaps the block's torrent chunk sizes could be chosen so that the chunks sent perfectly match transactions; this way you could start rebuilding the block on the other end as it's being downloaded from the swarm.


http://twitter.com/gubatron

Tier Nolan via bitcoin-dev
2015-09-24 08:52:34 UTC
Permalink
Post by Jonathan Toomim (Toomim Bros) via bitcoin-dev
1. A node mines a block.
2. It notifies its peers that it has a new block with an inv. Typical nodes have 8 peers.
3. The peers respond that they have not seen it, and request the block with getdata [hash].
4. The node sends out the block in parallel to all 8 peers simultaneously. If the node's upstream bandwidth is limiting, then all peers will receive most of the block before any peer receives all of the block. The block is sent out as the small header followed by a list of transactions.
5. Once a peer completes the download, it verifies the block, then enters step 2.
Mining pools currently connect to the "fast relay network". This is optimised for fast block distribution. It does no validation and is only for low-latency propagation. The normal network is used as a fallback.

My understanding is that it works as follows:

Each miner runs a normal full node and a relay node on the same computer.

The full node tells the relay node whenever it receives a new transaction via an inv message, and the relay node requests the full transaction.

The relay node tells its relay peers that it knows about the transaction (hash only) and its 4-byte key. This is not forwarded onwards, since the relay peer only gets the hash of the transaction and doesn't do validation anyway. The key is just a 4-byte counter.

Each relay node keeps a mapping of txid to key for each of its peers. There is some garbage collection, and entries are removed once the transaction is included in a block (there might be a confirm threshold).

When a block is found, the local node sends it to the relay node. The relay node then forwards it to all of its peers in a compact form.

The block is sent as a list of keys for that peer, and full transactions are only sent for unknown transactions.

When a relay node receives a block, it just verifies the PoW and checks that the block is new and recent. It does not do tx validation. It forwards the block to its local full node, which does the validation. Since the relay node is on localhost, it never gets kicked for sending invalid blocks. This prevents a DoS attack where you could send invalid blocks to the relay node and cause the local full node to kick it.

If all the transactions are already known, then it can forward a block for only 4 bytes per transaction. I think it has an optimisation so that this is compressed to 1 byte per tx.
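
To illustrate the scheme described above, here's a rough sketch (names and structures are my own, not the actual relay network code) of the per-peer txid-to-key mapping and the compact block encoding:

#include <array>
#include <cstdint>
#include <map>
#include <vector>

using TxId = std::array<uint8_t, 32>;

// Per-peer state: a txid -> 4-byte key mapping, where the key is just a
// counter assigned when the transaction is first relayed to that peer.
// Entries are garbage-collected once the transaction is buried in a block.
struct RelayPeer {
    std::map<TxId, uint32_t> key_for_txid;
    uint32_t next_key = 0;
    void remember(const TxId& txid) { key_for_txid.emplace(txid, next_key++); }
};

// One entry of a compact block: either the peer's 4-byte key for a known
// transaction, or the full transaction bytes for an unknown one.
struct CompactEntry {
    bool known = false;
    uint32_t key = 0;
    std::vector<uint8_t> full_tx;
};

std::vector<CompactEntry> encode_block(
    const RelayPeer& peer,
    const std::vector<std::pair<TxId, std::vector<uint8_t>>>& block_txns)
{
    std::vector<CompactEntry> out;
    for (const auto& [txid, raw] : block_txns) {
        auto it = peer.key_for_txid.find(txid);
        if (it != peer.key_for_txid.end())
            out.push_back({true, it->second, {}});  // 4-byte key only
        else
            out.push_back({false, 0, raw});         // full transaction
    }
    return out;
}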
