Discussion:
On Hardforks in the Context of SegWit
Matt Corallo via bitcoin-dev
2016-02-08 19:26:48 UTC
Hi all,

I believe we, today, have a unique opportunity to begin to close the
book on the short-term scaling debate.

First a little background. The scaling debate that has been gripping the
Bitcoin community for the past half year has taken an interesting turn
in 2016. Until recently, there have been two distinct camps - one
proposing a significant change to the consensus-enforced block size
limit to allow for more on-blockchain transactions and the other
opposing such a change, suggesting instead that scaling be obtained by
adding more flexible systems on top of the blockchain. At this point,
however, the entire Bitcoin community seems to have unified around a
single vision - roughly 2MB of transactions per block, whether via
Segregated Witness or via a hard fork, is something that can be both
technically supported and which adds more headroom before second-layer
technologies must be in place. Additionally, it seems that the vast
majority of the community agrees that segregated witness should be
implemented in the near future and that hard forks will be a necessity
at some point, and I don't believe it should be controversial that, as
we have never done a hard fork before, gaining experience by working
towards a hard fork now is a good idea.

With the apparent agreement in the community, it is incredibly
disheartening that there is still so much strife, creating a toxic
environment in which developers are not able to work, companies are
worried about their future ability to easily move Bitcoins, and
investors are losing confidence. The way I see it, this broad
unification of visions across all parts of the community places the
burden of selecting the most technically-sound way to achieve that
vision squarely on the development community.

Sadly, the strife is furthered by the huge risks involved in a hard fork
in the presence of strife, creating a toxic cycle which prevents a safe
hard fork. While there has been talk of doing an "emergency hardfork" as
an option, and while I do believe this is possible, it is not something
that will be easy, especially for something as controversial as rising
fees. Given that we have never done a hard fork before, being very
careful and deliberate in doing so is critical, and the technical
community working together to plan for all of the things that might go
wrong is key to not destroying significant value.

As such, I'd like to ask everyone involved to take this opportunity to
"reset", forgive past aggressions, and return the technical debates to
technical forums (ie here, IRC, etc).

As what a hard fork should look like in the context of segwit has never
(!) been discussed in any serious sense, I'd like to kick off such a
discussion with a (somewhat) specific proposal.

First some design notes:
* I think a key design feature should be taking this opportunity to add
small increases in decentralization pressure, where possible.
* Due to the several non-linear validation time issues in transaction
validation which are fixed by SegWit's signature-hashing changes, I
strongly believe any hard fork proposal which changes the block size
should rely on SegWit's existence.
* As with any hard fork proposal, it's easy to end up pulling in hundreds
of small fixes for any number of protocol annoyances. In order to avoid
doing this, we should try hard to stick with a few simple changes.

Here is a proposed outline (to activate only after SegWit and with the
currently-proposed version of SegWit):

1) The segregated witness discount is changed from 75% to 50%. The block
size limit (ie transactions + witness/2) is set to 1.5MB. This gives a
maximum block size of 3MB and a "network-upgraded" block size of roughly
2.1MB. This still significantly discounts script data which is kept out
of the UTXO set, while keeping the maximum-sized block limited.
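To sanity-check those numbers: with a 50% discount, a block's size for
limit purposes is its base size plus half its witness size. A quick
sketch (the ~57% witness fraction used for a "typical" upgraded block
below is a rough illustrative assumption, not a measured figure):

```c
#include <assert.h>

/* Size-for-limit purposes under the proposed rules: base bytes count
 * fully, witness bytes are discounted 50% (counted at half weight). */
static double virtual_mb(double base_mb, double witness_mb) {
    return base_mb + witness_mb / 2.0;
}

/* Largest actual block (base + witness) that fits a given limit, if a
 * fraction w of the block's bytes are witness data.
 * limit = (1-w)*s + (w/2)*s  =>  s = limit / (1 - w/2) */
static double max_actual_mb(double limit_mb, double witness_fraction) {
    return limit_mb / (1.0 - witness_fraction / 2.0);
}
```

An all-witness block hits the 3MB worst case; a block whose bytes are
roughly 57% witness data comes out around 2.1MB of actual size against
the 1.5MB limit.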

2) In order to prevent significant blowups in the cost to validate
pessimistic blocks, we must place additional limits on the size of many
non-segwit transactions. scriptPubKeys are now limited to 100 bytes in
size and may not contain OP_CODESEPARATOR, scriptSigs must be push-only
(ie no non-push opcodes), and transactions are only allowed to contain
up to 20 non-segwit inputs. Together these limits cap
total-bytes-hashed in block validation at under 200MB without any
possibility of making existing outputs unspendable and without adding
additional per-block limits which make transaction-selection-for-mining
difficult in the face of attacks or non-standard transactions. Though
200MB of hashing (roughly 2 seconds of hash-time on my high-end
workstation) is pretty strongly centralizing, limiting transactions to
fewer than 20 inputs seems arbitrarily low.
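For intuition on where total-bytes-hashed comes from: under the legacy
sighash algorithm, each input's signature check hashes roughly the whole
transaction, so hashing work scales with inputs times size. A toy model
(deliberately ignoring sighash-type details):

```c
#include <assert.h>

/* Toy model of legacy (pre-segwit) sighash cost: every input re-hashes
 * roughly the whole transaction, so total bytes hashed ~ inputs * size.
 * If both grow with the block size, hashing grows quadratically. */
static unsigned long long legacy_bytes_hashed(unsigned long long tx_size,
                                              unsigned long long n_inputs) {
    return tx_size * n_inputs;
}
```

Without a cap, a single 1MB transaction packed with thousands of inputs
reaches gigabytes of hashing; with a 20-input cap the same transaction
hashes on the order of 20MB.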

Along similar lines, we may wish to switch MAX_BLOCK_SIGOPS from
1-per-50-bytes across the entire block to a per-transaction limit which
is slightly looser (though not too much looser - even with libsecp256k1
1-per-50-bytes represents 2 seconds of single-threaded validation in
just sigops on my high-end workstation).

3) Move SegWit's generic commitments from an OP_RETURN output to a
second branch in the merkle tree. Depending on the timeline this may be
something to skip - once there is tooling for dealing with the extra
OP_RETURN output as a generic commitment, the small efficiency gain for
applications checking the witness of only one transaction or checking a
non-segwit commitment may not be worth it.

4) Instead of requiring the first four bytes of the previous block hash
field be 0s, we allow them to contain any value. This allows Bitcoin
mining hardware to reduce the required logic, making it easier to
produce competitive hardware [1].

I'll deliberately leave discussion of activation method out of this
proposal. Both jl2012 and Luke-Jr recently began some discussions about
methods for activation on this list, and I'd love to see those continue.
If folks think a hard fork should go ahead without SPV clients having a
say, we could table #4, or activate #4 a year or two after 1-3 activate.


[1] Simpler here may not be entirely true. There is potential for
optimization if you brute force the SHA256 midstate, but if nothing
else, this will prevent there being a strong incentive to use the
version field as nonce space. This may need more investigation, as we
may wish to just set the minimum difficulty higher so that we can add
more than 4 nonce-bytes.




Obviously we cannot reasonably move forward with a hard fork as long as
the contention in the community continues. Still, I'm confident
continuing to work towards SegWit as a 2MB-ish soft-fork in the short
term with some plans on what a hard fork should look like if we can form
broad consensus can go a long way to resolving much of the contention
we've seen.
jl2012--- via bitcoin-dev
2016-02-08 20:37:36 UTC
Thanks for this proposal. Just some quick responses:

1. The segwit hardfork (BIP HF) could be deployed with BIP141 (segwit
softfork). BIP141 doesn't need a grace period. BIP HF will have around 1 year
of grace period.

2. Threshold is 95%. Using 4 version bits: a) BIP 141; b) BIP HF; c) BIP 141
if BIP HF has already got 95%; d) BIP HF if BIP141 has already got 95%.
Voting a and c (or b and d) at the same time is invalid. BIP 141 is
activated if a>95% or (a+c>95% and b+d>95%). BIP HF is activated if b>95% or
(a+c>95% and b+d>95%).
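Expressed as code for clarity (my reading of the rules above; the
function names are illustrative, and a/b/c/d are the percentages of
blocks signalling each bit):

```c
#include <stdbool.h>
#include <assert.h>

/* a: BIP 141           b: BIP HF
 * c: BIP 141 iff BIP HF reaches 95%
 * d: BIP HF  iff BIP 141 reaches 95% */
static bool bip141_activated(double a, double b, double c, double d) {
    return a > 95.0 || (a + c > 95.0 && b + d > 95.0);
}

static bool biphf_activated(double a, double b, double c, double d) {
    return b > 95.0 || (a + c > 95.0 && b + d > 95.0);
}
```

Note that the combined branch activates both at once: neither side can
reach 95% alone, but together with the conditional voters both cross it.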

3. Fix time warp attack: this may break some SPV implementations

4. Limiting non-segwit inputs may make some existing signed tx invalid. My
proposal is: a) count the number of non-segwit sigop in a tx, including
those in unexecuted branch (sigop); b) measure the tx size without scriptSig
(size); c) a new rule is SUM(sigop*size) < some_value. This allows
calculation without actually running the script.
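A sketch of rule 4c (the struct, function name, and limit value below
are illustrative; "some_value" is deliberately left open above, and I
read the SUM as running over a block's transactions):

```c
#include <stddef.h>
#include <assert.h>

struct tx_info {
    unsigned long long sigops; /* non-segwit sigops, incl. unexecuted branches */
    unsigned long long size;   /* serialized size excluding scriptSigs */
};

/* Proposed rule: SUM(sigops * size) over the block's transactions must
 * stay under a fixed limit, computable without executing any script. */
static int block_hashing_ok(const struct tx_info *txs, size_t n,
                            unsigned long long limit) {
    unsigned long long total = 0;
    for (size_t i = 0; i < n; i++)
        total += txs[i].sigops * txs[i].size;
    return total < limit;
}
```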


Tao Effect via bitcoin-dev
2016-02-08 22:24:01 UTC
Hard forks should always come in response to some major crisis that all participants can agree is an actual crisis, as per the excellent rationale here:

http://bitledger.info/why-a-hard-fork-should-be-fought-and-its-not-evil-to-discuss/

And here:

http://bitledger.info/hard-fork-risks-and-why-95-should-be-the-standard/

Also, if you’re going to do a hard fork, you’d better make the most of it as hard forks must be a *rare* world-is-ending-if-we-don’t-do-it thing (otherwise Bitcoin cannot be considered decentralized in any sense of the word).

So for any sort of hard fork, be sure to address the real threats and challenges that are facing Bitcoin today:

1. Mining centralization.
2. Privacy.

Best regards,
Greg Slepak
Tao Effect via bitcoin-dev
2016-02-09 02:45:47 UTC
Look, if we’re going to declare something an emergency, we cannot on the one hand say things like: "I strongly believe bitcoin has no place in the world if the fee raise much higher than a few cents per typically-sized transaction”, and on the other declare that there is an emergency worth redefining what *Bitcoin is* because the average txn fee is on the order of 7 cents [1] and has remained reasonable for some time [2].
In terms of scaling, we are nowhere close to an emergency.

Scaling is priority #4, maybe, and it’s being taken care of.

Meanwhile, we should be directing our attention to the more pressing and serious concerns like mining centralization & privacy.

Mining centralization is a serious issue. It is *not cool* that 4 dudes (and 1 government) have the power to redefine what Bitcoin is *right now*.

Relevant post with suggestions for fixing that:

https://www.reddit.com/r/Bitcoin/comments/44kwf0/the_hardfork_that_bitcoin_really_needs_not/czrh3na

As far as I can tell, P2Pool & GBT are not the same thing, but I’ve been told that P2Pool might use GBT in some way, even though it’s listed on the wiki as not using it. [3]

A hard fork would ideally enforce decentralized mining pools somehow so that transaction selection is done at the edges instead of the center.

Cheers,
Greg

[1] http://www.cointape.com/
[2] https://blockchain.info/charts/transaction-fees
[3] https://en.bitcoin.it/wiki/Comparison_of_mining_pools
Post by Tao Effect via bitcoin-dev
Also, if you’re going to do a hard fork, you’d better make the most of it as hard forks must be a *rare* world-is-ending-if-we-don’t-do-it thing
In my opinion, the network publishing more than 1MB worth of
transactions while the limit is still 1MB *is* an emergency worthy of
a hard fork.
If that's not an emergency, then what is?
I strongly believe bitcoin has no place in the world if the fee raise
much higher than a few cents per typically-sized transaction.
Simon Liu via bitcoin-dev
2016-02-08 22:36:47 UTC
Post by Matt Corallo via bitcoin-dev
1) The segregated witness discount is changed from 75% to 50%. The block
size limit (ie transactions + witness/2) is set to 1.5MB. This gives a
maximum block size of 3MB and a "network-upgraded" block size of roughly
2.1MB. This still significantly discounts script data which is kept out
of the UTXO set, while keeping the maximum-sized block limited.
What is the rationale for offering a discount?

Is there an economic basis for setting the original discount at 75%
instead of some other number?

If it's okay to arbitrarily reduce the discount by 1/3, what are the
actual boundary limits: 50% - 75% ? 40% - 80% ?

--Simon
Peter Todd via bitcoin-dev
2016-02-08 22:54:36 UTC
Post by Simon Liu via bitcoin-dev
Post by Matt Corallo via bitcoin-dev
1) The segregated witness discount is changed from 75% to 50%. The block
size limit (ie transactions + witness/2) is set to 1.5MB. This gives a
maximum block size of 3MB and a "network-upgraded" block size of roughly
2.1MB. This still significantly discounts script data which is kept out
of the UTXO set, while keeping the maximum-sized block limited.
What is the rationale for offering a discount?
UTXO set space is significantly more expensive for the network as all
full nodes must keep the entire UTXO set.

Additionally, transaction input/output data in general is argued by some
to be less expensive than signatures, as you have more options with
regard to skipping validation of signatures (e.g. how Bitcoin Core skips
validation of signatures prior to checkpoints).
Post by Simon Liu via bitcoin-dev
Is there an economic basis for setting the original discount at 75%
instead of some other number?
If it's okay to arbitrarily reduce the discount by 1/3, what are the
actual boundary limits: 50% - 75% ? 40% - 80% ?
So, something to keep in mind in general in all these discussions is
that engineering always has "magic numbers" involved; the question is
where.

For example, I've proposed that we use a 99% miner vote threshold for
hard-forks (remember that the threshold can always be soft-forked down
later). The rationale there is, among other things, that you want to
ensure that the non-adopting miners' chain is useless for transacting
due to extremely long block times, and that it receives confirmations
slowly to prevent fraud. (Of course, there's also the
non-technical argument that we want to adopt hard-forks with extremely
wide adoption) At 99% the 1% remaining chain will have a block interval
of about 16 hours.
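The arithmetic behind those figures is just the 10-minute target divided
by the remaining hashpower (ignoring difficulty retargeting, which takes
a very long time to kick in on a nearly-dead chain):

```c
#include <assert.h>

/* Expected block interval, in hours, on a chain retained by only a
 * fraction `hashpower` of miners, before any difficulty adjustment:
 * 10 minutes / hashpower, converted to hours. */
static double stale_chain_interval_hours(double hashpower) {
    return (10.0 / hashpower) / 60.0;
}
```

At a 99% threshold the 1% remainder chain averages roughly 16.7 hours
per block; a 99.3% threshold stretches that to roughly 24 hours.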

Now, I've been asked "why 99%? isn't that a magic number?"

I could have instead said my goal was to increase the block interval to
24 hours, in which case I'd have used a 99.3% threshold. But again,
isn't 24 hours a magic number? Why not 25hrs?

The answer is 24 hours *is* a magic number - but trying to eliminate
that with yet another meta level of engineering analysis becomes a game
of diminishing returns.
--
https://petertodd.org 'peter'[:-1]@petertodd.org
000000000000000001ae7ca66e52359d67c407a739fde42b83ecc746d3ab735d
Anthony Towns via bitcoin-dev
2016-02-09 09:00:02 UTC
Post by Matt Corallo via bitcoin-dev
As what a hard fork should look like in the context of segwit has never
(!) been discussed in any serious sense, I'd like to kick off such a
discussion with a (somewhat) specific proposal.
Here is a proposed outline (to activate only after SegWit and with the
currently-proposed version of SegWit):
Is this intended to be activated soon (this year?) or a while away
(2017, 2018?)?
Post by Matt Corallo via bitcoin-dev
1) The segregated witness discount is changed from 75% to 50%. The block
size limit (ie transactions + witness/2) is set to 1.5MB. This gives a
maximum block size of 3MB and a "network-upgraded" block size of roughly
2.1MB. This still significantly discounts script data which is kept out
of the UTXO set, while keeping the maximum-sized block limited.
This would mean the limits go from:

                  pre-segwit   segwit pkh   segwit 2/2 msig   worst case
   current        1MB          -            -                 1MB
   segwit (75%)   1MB          1.7MB        2MB               4MB
   proposed (50%) 1.5MB        2.1MB        2.2MB             3MB

That seems like a fairly small gain (20% for pubkeyhash, which would
last for about 3 months if your growth rate means doubling every 9
months), so this probably makes the most sense as a "quick cleanup"
change, that also safely demonstrates how easy/difficult doing a hard
fork is in practice?

On the other hand, if segwit wallet deployment takes longer than
hoped, the 50% increase for pre-segwit transactions might be a useful
release-valve.

Doing a "2x" hardfork with the same reduction to a 50% segwit discount
would (I think) look like:

                  pre-segwit   segwit pkh   segwit 2/2 msig   worst case
   current        1MB          -            -                 1MB
   segwit (75%)   1MB          1.7MB        2MB               4MB
   2x + 50%       2MB          2.8MB        2.9MB             4MB

which seems somewhat more appealing, without making the worst-case any
worse; but I guess there's concern about the relay network scaling
above around 2MB per block, at least prior to IBLT/weak-blocks/whatever?
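(For the curious, the rows above can be reproduced from the limit, the
discount, and an assumed witness fraction per transaction type; the ~55%
and ~67% witness fractions used here are my own rough estimates for
p2pkh and 2-of-2 multisig spends:)

```c
#include <assert.h>

/* Actual block size (MB) achievable under a virtual-size limit with a
 * witness discount divisor (4 => 75% discount, 2 => 50%) when a
 * fraction w of the block's bytes are witness data.
 * limit = (1-w)*s + (w/divisor)*s  =>  s = limit / (1 - w*(1 - 1/divisor)) */
static double block_mb(double limit_mb, double divisor, double w) {
    return limit_mb / (1.0 - w * (1.0 - 1.0 / divisor));
}
```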
Post by Matt Corallo via bitcoin-dev
2) In order to prevent significant blowups in the cost to validate
[...] and transactions are only allowed to contain
up to 20 non-segwit inputs. [...]
This could potentially make old, signed, but time-locked transactions
invalid. Is that a good idea?
Post by Matt Corallo via bitcoin-dev
Along similar lines, we may wish to switch MAX_BLOCK_SIGOPS from
1-per-50-bytes across the entire block to a per-transaction limit which
is slightly looser (though not too much looser - even with libsecp256k1
1-per-50-bytes represents 2 seconds of single-threaded validation in
just sigops on my high-end workstation).
I think turning MAX_BLOCK_SIGOPS and MAX_BLOCK_SIZE into a combined
limit would be a good addition, ie:

#define MAX_BLOCK_SIZE 1500000
#define MAX_BLOCK_DATA_SIZE 3000000
#define MAX_BLOCK_SIGOPS 50000

#define MAX_COST 3000000
#define SIGOP_COST (MAX_COST / MAX_BLOCK_SIGOPS)
#define BLOCK_COST (MAX_COST / MAX_BLOCK_SIZE)
#define DATA_COST (MAX_COST / MAX_BLOCK_DATA_SIZE)

if (utxo_data * BLOCK_COST + bytes * DATA_COST + sigops * SIGOP_COST
    > MAX_COST)
{
    block_is_invalid();
}

Though I think you'd need to bump up the worst-case limits somewhat to
make that work cleanly.
Post by Matt Corallo via bitcoin-dev
4) Instead of requiring the first four bytes of the previous block hash
field be 0s, we allow them to contain any value. This allows Bitcoin
mining hardware to reduce the required logic, making it easier to
produce competitive hardware [1].
[1] Simpler here may not be entirely true. There is potential for
optimization if you brute force the SHA256 midstate, but if nothing
else, this will prevent there being a strong incentive to use the
version field as nonce space. This may need more investigation, as we
may wish to just set the minimum difficulty higher so that we can add
more than 4 nonce-bytes.
Could you just use leading non-zero bytes of the prevhash as additional
nonce?

So to work out the actual prev hash, set leading bytes to zero until
you hit a zero. Conversely, to add nonce info to a hash, if there are
N leading zero bytes, fill up the first N-1 (or less) of them with
non-zero values.

That would give a little more than 255**(N-1) possible values
((255**N - 1)/254, to be exact). That would actually scale automatically
with difficulty, and seems easy enough to make use of in an ASIC?

Cheers,
aj
Matt Corallo via bitcoin-dev
2016-02-09 21:54:01 UTC
Permalink
Thanks for keeping on-topic, replying to the proposal, and being civil!

Replies inline.
Post by Anthony Towns via bitcoin-dev
Post by Matt Corallo via bitcoin-dev
As what a hard fork should look like in the context of segwit has never
(!) been discussed in any serious sense, I'd like to kick off such a
discussion with a (somewhat) specific proposal.
Here is a proposed outline (to activate only after SegWit and with the
Is this intended to be activated soon (this year?) or a while away
(2017, 2018?)?
It's intended to activate when we have clear and broad consensus around
a hardfork proposal across the community.
Post by Anthony Towns via bitcoin-dev
Post by Matt Corallo via bitcoin-dev
1) The segregated witness discount is changed from 75% to 50%. The block
size limit (ie transactions + witness/2) is set to 1.5MB. This gives a
maximum block size of 3MB and a "network-upgraded" block size of roughly
2.1MB. This still significantly discounts script data which is kept out
of the UTXO set, while keeping the maximum-sized block limited.
pre-segwit   segwit pkh   segwit 2/2 msig   worst case
1MB          -            -                 1MB
1MB          1.7MB        2MB               4MB
1.5MB        2.1MB        2.2MB             3MB
That seems like a fairly small gain (20% for pubkeyhash, which would
last for about 3 months if your growth rate means doubling every 9
months), so this probably makes the most sense as a "quick cleanup"
change, that also safely demonstrates how easy/difficult doing a hard
fork is in practice?
On the other hand, if segwit wallet deployment takes longer than
hoped, the 50% increase for pre-segwit transactions might be a useful
release-valve.
Doing a "2x" hardfork with the same reduction to a 50% segwit discount
pre-segwit   segwit pkh   segwit 2/2 msig   worst case
1MB          -            -                 1MB
1MB          1.7MB        2MB               4MB
2MB          2.8MB        2.9MB             4MB
which seems somewhat more appealing, without making the worst-case any
worse; but I guess there's concern about the relay networking scaling
above around 2MB per block, at least prior to IBLT/weak-blocks/whatever?
The goal isn't really to get a "gain" here... it's mostly to decrease the
worst-case (4MB is pretty crazy) and keep the total size in line with
what the network could handle. Getting 1MB blocks through the network in
under a second is already incredibly difficult... 2MB is pretty scary and
will take lots of work... 3MB is right at the bound of "yeah, we can
pretty confidently get that to work well".
Post by Anthony Towns via bitcoin-dev
Post by Matt Corallo via bitcoin-dev
2) In order to prevent significant blowups in the cost to validate
[...] and transactions are only allowed to contain
up to 20 non-segwit inputs. [...]
This could potentially make old, signed, but time-locked transactions
invalid. Is that a good idea?
Hmmmmmm... you make a valid point. I was trying to avoid breaking old
transactions, but didn't think too much about time-locked ones.
Hmmmmmm... we could apply the limits only to txn that don't have at least
one newer-than-the-fork input, but I'm not sure I like that either.
Post by Anthony Towns via bitcoin-dev
Post by Matt Corallo via bitcoin-dev
Along similar lines, we may wish to switch MAX_BLOCK_SIGOPS from
1-per-50-bytes across the entire block to a per-transaction limit which
is slightly looser (though not too much looser - even with libsecp256k1
1-per-50-bytes represents 2 seconds of single-threaded validation in
just sigops on my high-end workstation).
I think turning MAX_BLOCK_SIGOPS and MAX_BLOCK_SIZE into a combined
#define MAX_BLOCK_SIZE 1500000
#define MAX_BLOCK_DATA_SIZE 3000000
#define MAX_BLOCK_SIGOPS 50000
#define MAX_COST 3000000
#define SIGOP_COST (MAX_COST / MAX_BLOCK_SIGOPS)
#define BLOCK_COST (MAX_COST / MAX_BLOCK_SIZE)
#define DATA_COST (MAX_COST / MAX_BLOCK_DATA_SIZE)
if (utxo_data * BLOCK_COST + bytes * DATA_COST + sigops * SIGOP_COST
    > MAX_COST)
{
    block_is_invalid();
}
Though I think you'd need to bump up the worst-case limits somewhat to
make that work cleanly.
There is a clear goal here of NOT using block-based limits and switching
to transaction-based limits. By switching to transaction-based limits we
avoid nasty issues with mining code either getting complicated or
enforcing too-strict limits on individual transactions.
Post by Anthony Towns via bitcoin-dev
Post by Matt Corallo via bitcoin-dev
4) Instead of requiring the first four bytes of the previous block hash
field be 0s, we allow them to contain any value. This allows Bitcoin
mining hardware to reduce the required logic, making it easier to
produce competitive hardware [1].
[1] Simpler here may not be entirely true. There is potential for
optimization if you brute force the SHA256 midstate, but if nothing
else, this will prevent there being a strong incentive to use the
version field as nonce space. This may need more investigation, as we
may wish to just set the minimum difficulty higher so that we can add
more than 4 nonce-bytes.
Could you just use leading non-zero bytes of the prevhash as additional
nonce?
So to work out the actual prev hash, set leading bytes to zero until
you hit a zero. Conversely, to add nonce info to a hash, if there are
N leading zero bytes, fill up the first N-1 (or less) of them with
non-zero values.
That would give a little more than 255**(N-1) possible values
((255**N-1)/254) to be exact). That would actually scale automatically
with difficulty, and seems easy enough to make use of in an ASIC?
Matt Corallo via bitcoin-dev
2016-02-09 22:00:44 UTC
Permalink
Oops, forgot to reply to your last point.

Indeed, we could push for more space by just always having one 0-byte,
but I'm not sure the added complexity helps anything? ASICs can never be
designed which use more extra-nonce-space than what they can reasonably
assume will always be available, so we might as well just set the
maximum number of bytes and let ASIC designers know exactly what they
have available. Currently blocks start with at least 8 0-bytes. We could
just say minimum difficulty is now 6 0-bytes (2**16x harder) and reserve
those? Anyway, someone needs to take a closer look at the midstate
optimization stuff to see what is reasonably required.

Matt
Post by Anthony Towns via bitcoin-dev
Post by Matt Corallo via bitcoin-dev
4) Instead of requiring the first four bytes of the previous block hash
field be 0s, we allow them to contain any value. This allows Bitcoin
mining hardware to reduce the required logic, making it easier to
produce competitive hardware [1].
[1] Simpler here may not be entirely true. There is potential for
optimization if you brute force the SHA256 midstate, but if nothing
else, this will prevent there being a strong incentive to use the
version field as nonce space. This may need more investigation, as we
may wish to just set the minimum difficulty higher so that we can add
more than 4 nonce-bytes.
Could you just use leading non-zero bytes of the prevhash as additional
nonce?
So to work out the actual prev hash, set leading bytes to zero until
you hit a zero. Conversely, to add nonce info to a hash, if there are
N leading zero bytes, fill up the first N-1 (or less) of them with
non-zero values.
That would give a little more than 255**(N-1) possible values
((255**N-1)/254) to be exact). That would actually scale automatically
with difficulty, and seems easy enough to make use of in an ASIC?
Luke Dashjr via bitcoin-dev
2016-02-09 22:10:43 UTC
Permalink
Post by Matt Corallo via bitcoin-dev
Indeed, we could push for more space by just always having one 0-byte,
but I'm not sure the added complexity helps anything? ASICs can never be
designed which use more extra-nonce-space than what they can reasonably
assume will always be available, so we might as well just set the
maximum number of bytes and let ASIC designers know exactly what they
have available. Currently blocks start with at least 8 0-bytes. We could
just say minimum difficulty is now 6 0-bytes (2**16x harder) and reserve
those?
The extranonce rolling doesn't necessarily need to happen in the ASIC itself.
With the current extranonce-in-gentx, an old RasPi 1 can only handle creating
work for up to 5 Gh/s with a 500k gentx.

Furthermore, there is a direct correlation between ASIC speeds and difficulty,
so increasing the extranonce space dynamically makes a lot of sense.

I don't see any reason *not* to increase the minimum difficulty at the same
time, though.

Luke
Matt Corallo via bitcoin-dev
2016-02-09 22:39:34 UTC
Permalink
Post by Luke Dashjr via bitcoin-dev
Post by Matt Corallo via bitcoin-dev
Indeed, we could push for more space by just always having one 0-byte,
but I'm not sure the added complexity helps anything? ASICs can never be
designed which use more extra-nonce-space than what they can reasonably
assume will always be available, so we might as well just set the
maximum number of bytes and let ASIC designers know exactly what they
have available. Currently blocks start with at least 8 0-bytes. We could
just say minimum difficulty is now 6 0-bytes (2**16x harder) and reserve
those?
The extranonce rolling doesn't necessarily need to happen in the ASIC itself.
With the current extranonce-in-gentx, an old RasPi 1 can only handle creating
work for up to 5 Gh/s with a 500k gentx.
Did you read the footnote on my original email? There is some potential
for optimization if you can brute-force the midstate, which today
requires using the nVersion space as nonce. In order to fix this we need
to add nonce space in the first compression function, so this is an
ideal place. Even ignoring that, reducing the complexity of mining
control stuff is really nice. If we could go back to just providing
block headers to miners instead of having to provide the entire
transaction-hash-list, we could move a ton of complexity back into
Bitcoin Core from mining setups, which have historically been pretty
poorly-reviewed codebases.
Post by Luke Dashjr via bitcoin-dev
Furthermore, there is a direct correlation between ASIC speeds and difficulty,
so increasing the extranonce space dynamically makes a lot of sense.
I don't see any reason *not* to increase the minimum difficulty at the same
time, though.
Meh, my point was less that it's a really bad idea and more that it adds
complexity that I don't see much need for.
Anthony Towns via bitcoin-dev
2016-02-10 05:16:56 UTC
Permalink
Post by Matt Corallo via bitcoin-dev
Indeed, we could push for more space by just always having one 0-byte,
but I'm not sure the added complexity helps anything? ASICs can never be
designed which use more extra-nonce-space than what they can reasonably
assume will always be available,
I was thinking ASICs could be passed a mask of which bytes they could
use for nonce; in which case the variable-ness can just be handled prior
to passing the work to the ASIC.

But on second thoughts, the block already specifies the target difficulty,
so maybe that could be used to indicate which bytes of the previous hash
must be zero? You have to be a bit careful to deal with the possibility
that you just did a maximum difficulty increase compared to the previous
block (in which case there may be fewer bits in the previous hash that
are zero), but that's just a factor of 4, so:

#define RETARGET_THRESHOLD ((1ul<<24) / 4)
y = 32 - bits[0];
if (bits[1]*65536 + bits[2]*256 + bits[3] >= RETARGET_THRESHOLD)
    y -= 1;
memset(prevhash, 0x00, y); // clear "first" y bytes of prevhash

should work correctly/safely, and give you 8 bytes of additional nonce
to play with at current difficulty (or 3 bytes at minimum difficulty),
and scale as difficulty increases. No need to worry about avoiding zeroes
that way either.



As far as midstate optimisations go, rearranging the block to be:

version ; time ; bits ; merkleroot ; prevblock ; nonce

would mean that the last 12 bytes of prevblock and the 4 bytes of nonce
would be available for manipulation [0] if the first round of sha256
was pre-calculated prior to being sent to ASICs (and also that version
and time wouldn't be available). Worth considering?



I don't see how you'd make either of these changes compatible
with Luke-Jr's soft-hardfork approach [1] to ensuring non-upgraded
clients/nodes can't be tricked into following a shorter chain, though.
I think the approach I suggested in my mail about avoiding Gavin's
proposed hard fork might work, though [2].



Combining these with making merge-mining easier [1] and Luke-Jr/Peter
Todd's ideas [3] about splitting the proof of work between something
visible to miners, and something only visible to pool operators to avoid
the block withholding attack on pooled mining would probably make sense
though, to reduce the number of hard forks visible to lightweight clients?

Cheers,
aj

[0] Giving a total of 128 bits, or 96 bits with difficulty such that
only the last 8 bytes of prevblock are available.

[1] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-February/012377.html

[2] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/012046.html

[3] https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-February/012384.html
In particular, the paragraph beginning "Alternatively, if the old
blockchain has 10% or less hashpower ..."

Nicolas Dorier via bitcoin-dev
2016-02-09 12:32:06 UTC
Permalink
Post by Matt Corallo via bitcoin-dev
2) In order to prevent significant blowups in the cost to validate
[...] and transactions are only allowed to contain
up to 20 non-segwit inputs. [...]
There are two kinds of hard fork: the kind that breaks things, and the
kind that does not.
Restricting non-segwit inputs would disrupt lots of services and
potentially invalidate hash-time-locked transactions, which would set a
very bad precedent.
So I'm strongly against this particular point.
Post by Matt Corallo via bitcoin-dev
scriptPubKeys are now limited to 100 bytes in
size and may not contain OP_CODESEPARATOR, scriptSigs must be push-only
(ie no non-push opcodes)
The same problem applies to native multisig, though it's potentially less
important than the previous point.