Discussion:
[bitcoin-dev] Weak block thoughts...
Gavin Andresen via bitcoin-dev
2015-09-23 15:43:11 UTC
I've been thinking about 'weak blocks' and SPV mining, and it seems to me
weak blocks will make things better, not worse, if we improve the mining
code a little bit.

First: the idea of 'weak blocks' (hat tip to Rusty for the term) is for
miners to pre-announce blocks that they're working on, before they've
solved the proof-of-work puzzle. To prevent DoS attacks, assume that some
amount of proof-of-work is done (hence the term 'weak block') to rate-limit
how many 'weak block' messages are relayed across the network.
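As a rough illustration of that rate-limiting rule (the names and the weakness factor below are made up for illustration, not a concrete proposal), a weak block is simply a header whose hash meets some multiple of the full target:

```python
# Illustrative sketch only -- WEAKNESS_FACTOR and these names are
# assumptions, not a concrete protocol proposal.
import hashlib

WEAKNESS_FACTOR = 64  # relay weak blocks at, say, 1/64 of full difficulty

def block_hash(header: bytes) -> int:
    # Bitcoin-style double SHA-256, interpreted as a little-endian integer
    return int.from_bytes(
        hashlib.sha256(hashlib.sha256(header).digest()).digest(), "little")

def classify(header: bytes, full_target: int) -> str:
    h = block_hash(header)
    if h <= full_target:
        return "full"   # meets full difficulty: a real block
    if h <= full_target * WEAKNESS_FACTOR:
        return "weak"   # meets the weaker threshold: worth relaying
    return "none"       # no meaningful proof-of-work: drop, don't relay
```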


Today, miners are incentivized to start mining an empty block as soon as
they see a block with valid proof-of-work, because they want to spend as
little time as possible mining a not-best chain.

Imagine miners always pre-announce the blocks they're working on to their
peers, and peers validate those 'weak blocks' as quickly as they are able.

Because weak blocks are pre-validated, when a full-difficulty block based
on a previously announced weak block is found, block propagation should be
insanely fast-- basically, as fast as a single packet can be relayed
across the network, the whole network could be mining on the new block.

I don't see any barrier to making accepting the full-difficulty block and
CreateNewBlock() insanely fast, and if those operations take just a
microsecond or three, miners will have an incentive to create blocks with
fee-paying transactions that weren't in the last block, rather than mining
empty blocks.

.................

A miner could try to avoid validation work by just taking a weak block
announced by somebody else, replacing the coinbase and re-computing the
merkle root, and then mining. They will be at a slight disadvantage to
fully validating miners, though, because they WOULD have to mine empty
blocks between the time a full block is found and a fully-validating miner
announced their next weak block.

.................

Weak block announcements are great for the network; they give transaction
creators a pretty good idea of whether or not their transactions are likely
to be confirmed in the next block. And if we're smart about implementing
them, they shouldn't increase bandwidth or CPU usage significantly, because
all the weak blocks at a given point in time are likely to contain the same
transactions.
--
Gavin Andresen
Bryan Bishop via bitcoin-dev
2015-09-23 16:07:08 UTC
On Wed, Sep 23, 2015 at 10:43 AM, Gavin Andresen via bitcoin-dev <
Post by Gavin Andresen via bitcoin-dev
First: the idea of 'weak blocks' (hat tip to Rusty for the term)
Here are some other "weak blocks" and "near blocks" proposals or mentions:
https://bitcointalk.org/index.php?topic=179598.0
http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2013-July/002976.html
http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2013-September/003275.html
https://bitcointalk.org/index.php?topic=673415.msg7658481#msg7658481
http://gnusha.org/bitcoin-wizards/2015-08-20.log

more recently:
http://gnusha.org/bitcoin-wizards/2015-09-20.log
http://diyhpl.us/wiki/transcripts/scalingbitcoin/roundgroup-roundup-1/
http://diyhpl.us/wiki/transcripts/scalingbitcoin/bitcoin-block-propagation-iblt-rusty-russell/

- Bryan
http://heybryan.org/
1 512 203 0507
Gregory Maxwell via bitcoin-dev
2015-09-23 19:24:06 UTC
On Wed, Sep 23, 2015 at 4:07 PM, Bryan Bishop via bitcoin-dev
Post by Bryan Bishop via bitcoin-dev
http://gnusha.org/bitcoin-wizards/2015-09-20.log
http://diyhpl.us/wiki/transcripts/scalingbitcoin/roundgroup-roundup-1/
http://diyhpl.us/wiki/transcripts/scalingbitcoin/bitcoin-block-propagation-iblt-rusty-russell/
See also my response to Peter R's paper that was republished to the
list at http://pastebin.com/jFgkk8M3
(See sections at "For example, imagine if miners only include
transactions that were previously committed" and especially "Miners
voluntarily participate in a fast consensus mechanism which commits to
transactions")

On Wed, Sep 23, 2015 at 3:43 PM, Gavin Andresen via bitcoin-dev
Post by Gavin Andresen via bitcoin-dev
Imagine miners always pre-announce the blocks they're working on to their
peers, and peers validate those 'weak blocks' as quickly as they are able.
Because weak blocks are pre-validated, when a full-difficulty block based on
a previously announced weak block is found, block propagation should be
insanely fast--
[...]
Post by Gavin Andresen via bitcoin-dev
A miner could try to avoid validation work by just taking a weak block
announced by somebody else, replacing the coinbase and re-computing the
merkle root, and then mining. They will be at a slight disadvantage to fully
Take care, here-- if a scheme is used where e.g. the full solution had
to be exactly identical to a prior weak block then the result would be
making mining not progress free because bigger miners would have
disproportionately more access to the weak/strong one/two punch. I
think what you're thinking here is okay, but it wasn't clear to me if
you'd caught that particular potential issue.

Avoiding this is why I've always previously described this idea as a
merge-mined block DAG (with blocks of arbitrary strength) which is
always efficiently differentially coded against prior state. A new
solution (regardless of who creates it) can still be efficiently
transmitted even if it differs in arbitrary ways (though the
efficiency is less the more different it is).

There is a cost to these schemes-- additional overhead from
communicating the efficiently encoded weak blocks. But participation
in this overhead is optional and doesn't impact the history.

I'm unsure of what approach to take for incentive compatibility
analysis. In the worst case this approach class has no better delays
(and higher bandwidth); but it doesn't seem to me to give rise to any
immediate incrementally strategic behavior (or at least none worse
than you'd get from just privately using the same scheme).

On Wed, Sep 23, 2015 at 4:28 PM, Peter R via bitcoin-dev
Post by Peter R via bitcoin-dev
Shouldn't mining pools and miners be paying you guys for coding solutions
that improve their profitability?
The income of miners as a whole doesn't depend on these sorts of
optimizations; competitive advantages do... so better common open
infrastructure helps mostly by putting propagation-disadvantaged
miners on an equal playing field. You'll note that none of them are
exactly sharing their SPV mining source code right now....
in any case, there are simple, expedient, and low-risk ways to improve
their equality in that respect: centralize (e.g. use bigger pools).
Gavin Andresen via bitcoin-dev
2015-09-23 21:37:25 UTC
Post by Gregory Maxwell via bitcoin-dev
On Wed, Sep 23, 2015 at 3:43 PM, Gavin Andresen via bitcoin-dev
[...]
Post by Gavin Andresen via bitcoin-dev
A miner could try to avoid validation work by just taking a weak block
announced by somebody else, replacing the coinbase and re-computing the
merkle root, and then mining. They will be at a slight disadvantage to
fully
Take care, here-- if a scheme is used where e.g. the full solution had
to be exactly identical to a prior weak block then the result would be
making mining not progress free because bigger miners would have
disproportionately more access to the weak/strong one/two punch. I
think what you're thinking here is okay, but it wasn't clear to me if
you'd caught that particular potential issue.
I'm assuming the optimized protocol would be forward-error-coded (e.g.
using IBLTs) and NOT require the full solution (or follow-on weak blocks)
to be exactly the same.
Post by Gregory Maxwell via bitcoin-dev
Avoiding this is why I've always previously described this idea as a
merge-mined block DAG (with blocks of arbitrary strength) which is
always efficiently differentially coded against prior state. A new
solution (regardless of who creates it) can still be efficiently
transmitted even if it differs in arbitrary ways (though the
efficiency is less the more different it is).
Yup, although I don't get the 'merge mined' bit; the weak blocks are
ephemeral, probably purged out of memory as soon as a few full blocks are
found...
Post by Gregory Maxwell via bitcoin-dev
I'm unsure of what approach to take for incentive compatibility
analysis. In the worst case this approach class has no better delays
(and higher bandwidth); but it doesn't seem to me to give rise to any
immediate incrementally strategic behavior (or at least none worse
than you'd get from just privately using the same scheme).
I don't see any incentive problems, either. Worst case is more miners
decide to skip validation and just mine a variation of the
highest-fee-paying weak block they've seen, but that's not a disaster--
invalid blocks will still get rejected by all the non-miners running full
nodes.

If we did see that behavior, I bet it would be a good strategy for a big
hashrate miner to dedicate some of their hashrate to announcing invalid
weak blocks; if you can get your lazy competitors to mine it, then you
win....
--
Gavin Andresen
Jonathan Toomim (Toomim Bros) via bitcoin-dev
2015-09-23 22:16:14 UTC
Post by Gregory Maxwell via bitcoin-dev
Take care, here-- if a scheme is used where e.g. the full solution had
to be exactly identical to a prior weak block then the result would be
making mining not progress free because bigger miners would have
disproportionately more access to the weak/strong one/two punch. I
think what you're thinking here is okay, but it wasn't clear to me if
you'd caught that particular potential issue.
I'm assuming the optimized protocol would be forward-error-coded (e.g. using IBLTs) and NOT require the full solution (or follow-on weak blocks) to be exactly the same.
One possible improvement on this is to cache Merkle nodes/subtrees. When a weak block is created, nodes could cache the hashes for the Merkle nodes along with each node's children. A miner could then describe their block in terms of Merkle nodes (describing groups of 2^n transactions), which would give them the ability to copy e.g. 87.5% or 96.875% or whatever of their new block from someone else's weak block but with a few modifications (e.g. new transactions) in the remaining non-prespecified portion. This gives small miners the ability to trade off versatility (do I specify all of the transactions with my own Merkle structure?) versus propagation speed (do I copy all of my Merkle tree from another miner's weak block?) with all steps in between.
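As a toy illustration of the subtree reuse (function names are mine, and this uses a simple power-of-two tree rather than Bitcoin's odd-leaf duplication rule): a miner reusing a cached left-subtree root from someone else's weak block only needs to hash its own replacement transactions.

```python
# Toy sketch (hypothetical names; simple power-of-two Merkle tree, not
# Bitcoin's exact odd-leaf rule): copy half the block from a weak block's
# cached subtree root, supply the other half yourself.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves):
    level = list(leaves)
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def root_from_cached_left(cached_left_root: bytes, own_right_leaves) -> bytes:
    # 50% of the tree comes from the cached weak block; only the right
    # half's hashes need to be recomputed locally.
    return h(cached_left_root + merkle_root(own_right_leaves))
```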

I've got a proposal for a block propagation protocol inspired by bittorrent applied to the Merkle tree instead of chunks of a file. Weak blocks would fit in with this blocktorrent protocol quite nicely by caching and pre-transmitting Merkle nodes. I don't want to hijack this thread, so I'll post it under a separate subject in an hour or so.
Rusty Russell via bitcoin-dev
2015-09-24 01:11:03 UTC
Post by Gavin Andresen via bitcoin-dev
I don't see any incentive problems, either. Worst case is more miners
decide to skip validation and just mine a variation of the
highest-fee-paying weak block they've seen, but that's not a disaster--
invalid blocks will still get rejected by all the non-miners running full
nodes.
That won't help SPV nodes, unfortunately.
Post by Gavin Andresen via bitcoin-dev
If we did see that behavior, I bet it would be a good strategy for a big
hashrate miner to dedicate some of their hashrate to announcing invalid
weak blocks; if you can get your lazy competitors to mine it, then you
win....
We already see non-validating mining, but they do empty blocks. This
just makes it more attractive in the future, since you can collect fees
too.

But I think it's clear we'll eventually need some UTXO commitment so
full nodes can tell SPV nodes about bad blocks.

Cheers,
Rusty.
Gregory Maxwell via bitcoin-dev
2015-09-27 01:39:00 UTC
Post by Gavin Andresen via bitcoin-dev
Post by Gregory Maxwell via bitcoin-dev
Avoiding this is why I've always previously described this idea as a
merge-mined block DAG (with blocks of arbitrary strength) which is
always efficiently differentially coded against prior state. A new
solution (regardless of who creates it) can still be efficiently
transmitted even if it differs in arbitrary ways (though the
efficiency is less the more different it is).
Yup, although I don't get the 'merge mined' bit; the weak blocks are
ephemeral, probably purged out of memory as soon as a few full blocks are
found...
Unless the weak block transaction list can be a superset of the block
transaction list, size-proportional propagation costs are not totally
eliminated.

Even if the weak block criterion is MUCH lower than the block
criterion (which would become problematic in its own right at some
point), the network will sometimes find blocks when there hasn't been
any weak block priming at all (e.g. all prior priming has made it into
blocks already).

So if the weak block commitment must be exactly the block commitment,
you end up having to add a small number of transactions to your block
above and beyond the latest well-propagated weak blocks... Could still
work, but then it creates a pressure to crank up the weak block
overhead, which could better be avoided.
Tier Nolan via bitcoin-dev
2015-09-27 09:42:24 UTC
On Sun, Sep 27, 2015 at 2:39 AM, Gregory Maxwell via bitcoin-dev <
Post by Gregory Maxwell via bitcoin-dev
Unless the weak block transaction list can be a superset of the block
transaction list, size-proportional propagation costs are not totally
eliminated.
The POW threshold could be dynamic. The first weak-block that builds on a
new block could be forwarded with a smaller target.

This reduces the window size until at least one weak block is propagated.

The change in threshold could be time based (for the first 30 seconds or
so). This would cause a surge of traffic once a new block has propagated,
so perhaps it's not such a good idea.
Post by Gregory Maxwell via bitcoin-dev
Even if the weak block criterion is MUCH lower than the block
criterion (which would become problematic in its own right at some
point), the network will sometimes find blocks when there hasn't been
any weak block priming at all (e.g. all prior priming has made it into
blocks already).
If there is a transaction backlog, then miners could forward merkle
branches with transactions in the memory pool with a commitment in the
coinbase.
Kalle Rosenbaum via bitcoin-dev
2015-09-27 15:10:24 UTC
I was mansplaining weak blocks to my wife. She asked a simple question:

Why would I, as a miner, publish a weak block if I find one?

I don't know.

Sure, I will get faster propagation for my solved block, should I find one.
On the other hand everybody else mining a similar block will enjoy the same
benefit. Assuming that I'm not a huge miner, it's unlikely that I will
actually solve the block, so I'm probably just giving away fast propagation
times to someone else.

So how does publishing a weak block benefit the producer of it more than
the other miners? Please help me understand this.

/Kalle Rosenbaum


_______________________________________________
bitcoin-dev mailing list
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Gregory Maxwell via bitcoin-dev
2015-09-27 19:50:22 UTC
Kalle Rosenbaum via bitcoin-dev
2015-09-28 08:30:42 UTC
Post by Gregory Maxwell via bitcoin-dev
On Sun, Sep 27, 2015 at 3:10 PM, Kalle Rosenbaum via bitcoin-dev
Post by Kalle Rosenbaum via bitcoin-dev
Why would I, as a miner, publish a weak block if I find one?
I don't know.
Sure, I will get faster propagation for my solved block, should I find one.
On the other hand everybody else mining a similar block will enjoy the same
benefit. Assuming that I'm not a huge miner, it's unlikely that I will
actually solve the block, so I'm probably just giving away fast propagation
times to someone else.
So how does publishing a weak block benefit the producer of it more than
the other miners? Please help me understand this.
Keep in mind, because of efficient differential transmission the cost
to you is effectively nothing if your transaction acceptance policy is
predictable; it's a handful of bytes sent. And by failing to send
yours you do little to nothing to deny others the improvement.
Suppose that you've solved a block Z (weak or not) and you want to
propagate it using a previous weak block Y. With "efficient differential
transmission", I assume that you refer to the transmission of the
differences between Y and Z to a peer? What encodings are discussed? I
guess IBLTs are a hot candidate, but are there other schemes in the making?
I suppose that sending something like "weak block Y plus transactions A, B,
C minus transaction ids h(D), h(E)" is not considered an efficient
differential transmission. Then that's part of the answer to my question.
Post by Gregory Maxwell via bitcoin-dev
Every N seconds, every miner send to every other miner what they're
working on. This isn't totally crazy-- efficient differential
transmission will keep the amount transmitted small.
Any block found can be referenced to any of these earlier worklists.
What would the effect be of not transmitting yours?
If your block is unlike everyone else's, you would suffer great delays
in the event you found a block.
If your block is mostly like everyone else's, you wouldn't suffer as
much delay-- but the transmission costs would be negligible in that
case. ... the size sent is proportional to the improvement you get
when finding a block.
"the size sent is proportional to the improvement you get when finding a
block." - This encapsulates the issue quite well! The more exotic a block
I'm building, the more I would benefit from publishing a weak block, but
my weak block would also be larger.
Post by Gregory Maxwell via bitcoin-dev
In either case, no one else is harmed by you not sending yours... they
still send their lists.
A problem with that scheme is that unless you've layered an
identity-based access control system on it, anyone can DoS attack it,
because anyone can send as much as they want; they don't even have to
be actual miners.
What weak blocks add to that is using hashcash as a rate-limiting
mechanism-- a coordination-free lottery weighted by hash-power decides
who can transmit.
What if you don't participate in the lottery and share your solutions?
No major harm for the other users... the other users will just choose
a somewhat lower weak-block threshold to get the updates at the
desired rate than they would otherwise. To the extent that what you
were working on was different from anyone else, you'll suffer because
you failed to make use of your chance to influence what could be
efficiently transmitted to include your own blocks.
Makes perfect sense. Also, if I'm working on an exotic block, the
probability of someone extending my weak block would be low-ish, so I'm not
necessarily "giving away fast propagation times to someone else" as I first
thought.
Post by Gregory Maxwell via bitcoin-dev
You could also ask why you would transitively relay someone else's
announcement-- well, if it helped their blocks too (by reflecting
things they also want to mine) the answer is obvious. But what if it
was disjoint from the things they wanted to mine and didn't help
compared to the weak blocks they already relayed? In that case it's
still likely in their interest to relay it, because if a block
similar to it is produced and they extend that block, they may end up
orphaned because of propagation delays their parent block suffered.
What if they receive an announcement which is so "ugly" that they
wouldn't extend the chain with the strong block version of it (they'd
intentionally try to fork it off?)-- in that case they wouldn't want
to relay it. So much the same logic as why you relay other parties
blocks applies, including-- relaying helps the network, but if you
don't it'll still get along fine without you.
Thank you very much for your explanation.

/Kalle
Jonathan Toomim (Toomim Bros) via bitcoin-dev
2015-09-28 13:30:22 UTC
Suppose that you've solved a block Z (weak or not) and you want to propagate it using a previous weak block Y. With "efficient differential transmission", I assume that you refer to the transmission of the differences between Y and Z to a peer? What encodings are discussed? I guess IBLTs are a hot candidate, but are there other schemes in the making? I suppose that sending something like "weak block Y plus transactions A, B, C minus transaction ids h(D), h(E)" is not considered an efficient differential transmission. Then that's part of the answer to my question.
IBLTs are effective for synchronizing mempools, to ensure that all nodes in a network can successfully map a transaction hash to a full transaction. However, IBLTs do not help with the ordering of the transactions.

Encoding the new blocks as a diff (delete bytes x through y, insert string s after byte z) based on a weak block would probably be pretty effective, but it would probably require a lot of memory for keeping a weak block (or chain of diffs) for each miner that publishes weak blocks. It might be a little complicated to manage and remove duplicate information between weak blocks published by different sources. You'd probably have to build a weak block tree or DAG with diffs as edges, and walk the tree each time you wanted to fetch a (weak) block.

Another strategy is to use the Merkle tree nodes. Each node is a hash of its concatenated child nodes, and each node thus specifies the order of 2^n transaction hashes. Changing one transaction hash requires modifying log_2(n) Merkle node hashes, which is okay but maybe not as good as the diff approach. However, the main benefit comes from compressing and storing data from many different weak blocks generated by different miners. You can build a cache of Merkle nodes, and each time you get a new weak block, you can add any new Merkle nodes to that cache. There's some more info on this here: http://bitcoin-development.narkive.com/dGIxjVI5/torrent-style-new-block-propagation-on-merkle-trees
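A minimal version of such a node cache (structure and names are hypothetical) might look like:

```python
# Minimal sketch of a Merkle node cache (hypothetical structure): each
# cached internal node remembers its two children, so any node hash seen
# in a later weak block expands back into its ordered 2^n leaf hashes.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def add_node(cache: dict, left: bytes, right: bytes) -> bytes:
    node = h(left + right)
    cache[node] = (left, right)
    return node

def expand(cache: dict, node: bytes) -> list:
    """Recursively expand a node into the ordered leaves (txids) below it."""
    if node not in cache:
        return [node]  # not a known internal node: treat as a leaf txid
    left, right = cache[node]
    return expand(cache, left) + expand(cache, right)
```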

Merkle tree encodings handle replacements of transactions well, but they have trouble with insertions or deletions near the beginning of a block. Efforts could be made to avoid insertions and deletions in the actual transaction ordering to improve transmissibility, or a hybrid system could be implemented in which byte-level diffs or transaction-level diffs are used for transmitting the weak blocks as a diff against previously cached Merkle nodes.

Or maybe there's a better way.

Btc Drak via bitcoin-dev
2015-09-23 16:07:31 UTC
For anyone who missed the discussions of weak blocks, here are the Scaling
Bitcoin's transcripts:

http://diyhpl.us/wiki/transcripts/scalingbitcoin/bitcoin-block-propagation-iblt-rusty-russell/

http://diyhpl.us/wiki/transcripts/scalingbitcoin/roundgroup-roundup-1/
(under Network Propagation).

Peter R via bitcoin-dev
2015-09-23 16:28:20 UTC
Hi Gavin,

One thing that's not clear to me is whether it is even necessary--from the perspective of the block size limit--to consider block propagation. Bitcoin has been successfully operating unconstrained by the block size limit over its entire history (except for in the past few months)--block propagation never entered into the equation.

Imagine that the limit were raised to significantly above the free market equilibrium block size Q*. Mining pools and other miners would then have an incentive to work out schemes like "weak blocks," relay networks, IBLTs, etc., in order to reduce the risk of orphaning larger blocks (blocks with more fees that they'd like to produce if it were profitable).

Shouldn't mining pools and miners be paying you guys for coding solutions that improve their profitability?

Best regards,
Peter
Gavin via bitcoin-dev
2015-09-23 17:40:58 UTC
Post by Peter R via bitcoin-dev
Hi Gavin,
One thing that's not clear to me is whether it is even necessary--from the perspective of the block size limit--to consider block propagation.
I didn't mention the block size limit; weak blocks are a good idea no matter the limit.

As for miners paying for the work: lots of companies contributed to the Foundation, and will contribute to the DCI. When there are big, stable, profitable companies I think we'll see them task their developers to contribute code.

I think optimizing new block propagation is interesting and important, so I plan on working on it.
Peter R via bitcoin-dev
2015-09-23 17:49:04 UTC
Thanks for the reply, Gavin. I agree on all points.

Peter
Tier Nolan via bitcoin-dev
2015-09-23 18:48:38 UTC
On Wed, Sep 23, 2015 at 4:43 PM, Gavin Andresen via bitcoin-dev <
Post by Gavin Andresen via bitcoin-dev
Imagine miners always pre-announce the blocks they're working on to their
peers, and peers validate those 'weak blocks' as quickly as they are able.
Because weak blocks are pre-validated, when a full-difficulty block based
on a previously announced weak block is found, block propagation should be
insanely fast-- basically, as fast as a single packet can be relayed across
the network, the whole network could be mining on the new block.
I don't see any barrier to making accepting the full-difficulty block and
CreateNewBlock() insanely fast, and if those operations take just a
microsecond or three, miners will have an incentive to create blocks with
fee-paying transactions that weren't in the last block, rather than mining
empty blocks.
You can create these blocks in advance too.

- receive weak block
- validate
- create child block

It becomes a pure array lookup to get the new header that builds on top of
that block. The child blocks would need to be updated as the memory pool
changes though.
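The "pure array lookup" idea above can be sketched as follows (all names are hypothetical): after validating a weak block, pre-build a child block template keyed by the weak block's hash, so that when the matching full block arrives, switching to the new tip is a lookup rather than a fresh CreateNewBlock() run.

```python
child_templates = {}  # weak block hash -> pre-built child block template

def on_weak_block_validated(weak_hash, mempool_txs):
    # Build the child template ahead of time; it must be refreshed
    # whenever the memory pool changes, as Tier notes.
    child_templates[weak_hash] = {
        "prev_hash": weak_hash,
        "txs": list(mempool_txs),  # fee-paying txs not in the weak block
    }

def on_full_block(block_hash):
    # Pure lookup on the critical path: no validation, no template building.
    return child_templates.get(block_hash)

on_weak_block_validated("wb1", ["tx9"])
tmpl = on_full_block("wb1")
```

If the full block does not match any pre-announced weak block, the lookup misses and the miner falls back to the slow path (validate, then build a template).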
Post by Gavin Andresen via bitcoin-dev
A miner could try to avoid validation work by just taking a weak block
announced by somebody else, replacing the coinbase and re-computing the
merkle root, and then mining. They will be at a slight disadvantage to
fully validating miners, though, because they WOULD have to mine empty
blocks between the time a full block is found and a fully-validating miner
announced their next weak block.
This also speeds up propagation for the miner. The first weak block that
is broadcast could end up being copied by many other miners.

A miner who is copying a weak block could send just the new coinbase plus
the original header if he hits a block. Weak blocks that are just coinbase +
header could have lower POW requirements, since they use up much less
bandwidth.
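A relay policy along those lines could be sketched like this (the thresholds and names are purely illustrative, not a concrete proposal): header-only weak blocks are cheap to relay, so they can be accepted at a weaker proof-of-work threshold than full weak blocks.

```python
FULL_TARGET = 1 << 224  # illustrative full-block PoW target

def relay_target(weak_block_kind):
    # Full weak blocks must meet, say, 1/16 of block difficulty;
    # coinbase+header-only weak blocks only 1/64, since they cost
    # far less bandwidth to relay.
    if weak_block_kind == "header_only":
        return FULL_TARGET * 64
    return FULL_TARGET * 16

def should_relay(block_hash_value, kind):
    # Lower hash value = more work; relay if the weak PoW bar is met.
    return block_hash_value <= relay_target(kind)
```

The ratio between the two thresholds is a tunable rate-limiting knob: it trades weak-block freshness against relay bandwidth.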

Miners would mostly copy other miners once they had verified their blocks.
The IBLT system works well here. A miner could pick a weak block that is
close to what it actually wants to broadcast.
Post by Gavin Andresen via bitcoin-dev
Weak block announcements are great for the network; they give transaction
creators a pretty good idea of whether or not their transactions are likely
to be confirmed in the next block.
Aggregator nodes could offer a service to show/prove how many weak blocks
a transaction has been included in.
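The core of such an aggregator is a small index from txid to the weak blocks that included it (a hypothetical sketch; proving inclusion would additionally require merkle branches, which this omits):

```python
from collections import defaultdict

inclusions = defaultdict(set)  # txid -> set of weak block hashes seen

def record_weak_block(weak_hash, txids):
    # Index every transaction in an observed weak block.
    for txid in txids:
        inclusions[txid].add(weak_hash)

def inclusion_count(txid):
    # How many distinct weak blocks included this transaction --
    # a rough proxy for its chance of confirming in the next block.
    return len(inclusions[txid])

record_weak_block("wb1", ["txA", "txB"])
record_weak_block("wb2", ["txA"])
```

A wallet could poll `inclusion_count` for its own txid: a count near the number of recently observed weak blocks suggests most hashpower is already mining on it.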
Post by Gavin Andresen via bitcoin-dev
And if we're smart about implementing them, they shouldn't increase
bandwidth or CPU usage significantly, because all the weak blocks at a
given point in time are likely to contain the same transactions.
This assumes other compression systems for handling block propagation.
Rusty Russell via bitcoin-dev
2015-09-24 01:32:34 UTC
Permalink
Post by Gavin Andresen via bitcoin-dev
I've been thinking about 'weak blocks' and SPV mining, and it seems to me
weak blocks will make things better, not worse, if we improve the mining
code a little bit.
First: the idea of 'weak blocks' (hat tip to Rusty for the term) is for
miners to pre-announce blocks that they're working on, before they've
solved the proof-of-work puzzle. To prevent DoS attacks, assume that some
amount of proof-of-work is done (hence the term 'weak block') to rate-limit
how many 'weak block' messages are relayed across the network.
Today, miners are incentivized to start mining an empty block as soon as
they see a block with valid proof-of-work, because they want to spend as
little time as possible mining a not-best chain.
Imagine miners always pre-announce the blocks they're working on to their
peers, and peers validate those 'weak blocks' as quickly as they are able.
Because weak blocks are pre-validated, when a full-difficulty block based
on a previously announced weak block is found, block propagation should be
insanely fast-- basically, as fast as a single packet can be relayed across
the network, the whole network could be mining on the new block.
The bandwidth/latency argument is solid. And if a block encodes to <
~3k, then we can just spray it to (some?) peers rather than using INV.

But validation is only trivially cacheable if the delta to the previous
weak block is zero. The "partially validated" cases need to be coded
with care (e.g. total opcode constraints, tx order).
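Rusty's caveat can be stated as a predicate (a simplified sketch with invented names): cached validation results carry over to the full block only when the transaction list after the coinbase is identical, because any reordering or substitution can violate whole-block constraints even if every transaction is individually valid.

```python
def can_reuse_validation(weak_txs, full_txs):
    # Only the coinbase (index 0) may differ between the weak block and
    # the full block. Order matters: a reordered list can break
    # topological validity and whole-block limits, so it forces a full
    # re-check even though the transaction *set* is unchanged.
    return weak_txs[1:] == full_txs[1:]
```

Anything stricter than this (partial reuse when only the tail differs, say) is exactly the "coded with care" territory the paragraph above warns about.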

I was thinking as a first cut we do the opposite: don't validate weak
blocks at all (other than PoW), and just use them as a bandwidth
optimization.

Ambition is good though!

Cheers,
Rusty.
PS. Original idea came to me from Greg Maxwell; Peter Todd called it
"near blocks" and extolled their virtues 2 years ago...