Discussion:
Segregated Witness features wish list
jl2012--- via bitcoin-dev
2015-12-10 06:47:35 UTC
It seems the current consensus is to implement Segregated Witness. SW
opens many new possibilities but we need a balance between new features
and deployment time frame. I'm listing them by priority:

1-2 are about scalability and have highest priority

1. Witness size limit: with SW we should allow a bigger overall block
size. It seems 2MB is considered to be safe for many people. However,
the exact size and growth of block size should be determined based on
testing and reasonable projection.

2. Deployment time frame: I prefer as soon as possible, even if none of
the following new features are implemented. This is not only a technical
issue but also a response to the community, which has been waiting for a
scaling solution for years.

3-6 promote safety and reduce the level of trust (higher priority)

3. SIGHASH_WITHINPUTVALUE [1]: there are many SIGHASH proposals, but this
one has the highest priority as it makes offline signing much easier.
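The gist of the proposal in [1], as a rough sketch: the signature hash additionally commits to the value of the coin being spent, so an offline signer can verify the fee it displays without fetching every full previous transaction. The field layout below is purely illustrative, not the proposal's actual serialization:

```python
import hashlib

def sighash_with_input_value(tx_serialized: bytes, input_index: int,
                             input_value_satoshis: int) -> bytes:
    """Illustrative only: commit to the spent input's value inside the
    signature hash. A hardware wallet can then trust the fee it shows,
    because a lying host would produce an invalid signature.
    (The field layout here is hypothetical, not the proposal's encoding.)"""
    preimage = (
        tx_serialized
        + input_index.to_bytes(4, "little")
        + input_value_satoshis.to_bytes(8, "little")  # the key addition
    )
    return hashlib.sha256(hashlib.sha256(preimage).digest()).digest()
```

Any mismatch between the value the signer believed and the value actually spent changes the digest, invalidating the signature.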

4. Sum of fees, sigop count, size, etc. as part of the witness hash
tree: for compact proofs of violations of these parameters. I prefer to
have this feature in SWv1. Otherwise, it would become an ugly softfork
in SWv2, as we would need to maintain one more hash tree.

5. Height and position of an input as part of the witness will allow
compact proofs of non-existing UTXOs. We need this eventually. If it is
not done in SWv1, we could softfork it nicely in SWv2. I prefer this
earlier, as it is the last missing piece for compact fraud proofs.

6. BIP62 and OP_IF malleability fix [2] as standardness rules:
involuntary malleability may still be a problem in the relay network and
may make the relay less efficient (need more research)

7-15 are new features and long-term goals (lower priority)

7. Enable OP_CAT etc:
OP_CAT will allow tree signatures described by [3]. Even without Schnorr
signature, m-of-n multisig will become more efficient if m < n.

OP_SUBSTR/OP_LEFT/OP_RIGHT will allow people to shorten a payment
address, while sacrificing security.

I'm not sure how those disabled bitwise logic opcodes could be useful.

Multiplication and division may still be considered risky and not very
useful?
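To illustrate why OP_CAT enables the tree signatures of [3]: a script that can concatenate and hash can verify a Merkle branch, so a set of keys can be committed to as a tree and only one branch revealed at spend time. A minimal Python sketch of that branch check (sha256 stands in for the script's hash opcode, and the branch encoding is illustrative):

```python
import hashlib

def h(data: bytes) -> bytes:
    # sha256 stands in for the script's hash opcode (e.g. OP_HASH160)
    return hashlib.sha256(data).digest()

def verify_merkle_branch(leaf: bytes, branch, root: bytes) -> bool:
    """Walk a Merkle branch by concatenate-then-hash, which is exactly
    what a script could do with OP_CAT plus a hash opcode.
    `branch` is a list of (sibling_hash, node_is_left) pairs."""
    node = h(leaf)
    for sibling, node_is_left in branch:
        node = h(node + sibling) if node_is_left else h(sibling + node)
    return node == root
```

For m-of-n with m < n, each leaf can be an m-of-m subset of keys, so the scriptPubKey commits only to a short root and the spender reveals a single branch.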

8. Schnorr signature: for very efficient multisig [3] but could be
introduced later.

9. Per-input lock-time and relative lock-time: define lock-time and
relative lock-time in the witness, signed by the user. BIP68 is not an
ideal solution due to its limited lock-time range and resolution.

10. OP_PUSHLOCKTIME and OP_PUSHRELATIVELOCKTIME: push the lock-time and
relative lock-time to stack. Will allow more flexibility than OP_CLTV
and OP_CSV

11. OP_RETURNTURE, which allows softforking in any new opcodes [4]. It
is not really necessary with the version-byte design, but with
OP_RETURNTURE we don't need to bump the version byte too frequently.

12. OP_EVAL (BIP12), which enables Merkleized Abstract Syntax Trees
(MAST) with OP_CAT [5]. This will also obsolete BIP16. Further
restrictions should be made to make it safe [6]:
a) We may allow at most one OP_EVAL in the scriptPubKey
b) Not allow any OP_EVAL in the serialized script, nor anywhere else in
the witness (therefore not Turing-complete)
c) In order to maintain the ability to statically analyze scripts, the
serialized script must be the last push of the witness (or script
fails), and OP_EVAL must be the last OP code in the scriptPubKey
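Restrictions (a)-(c) amount to a static check over the scriptPubKey and witness. A hypothetical Python sketch over a symbolic representation (opcodes as strings, witness items as strings with the serialized script last; none of these names come from BIP12 itself):

```python
OP_EVAL = "OP_EVAL"  # symbolic name; the real opcode assignment is open

def check_eval_restrictions(script_pubkey_ops, witness_pushes) -> bool:
    """Sketch of restrictions (a)-(c) above.
    script_pubkey_ops: list of opcode names / data pushes.
    witness_pushes: list of witness items, serialized script last."""
    # (a) + (c): exactly one OP_EVAL, and it must be the last opcode
    if script_pubkey_ops.count(OP_EVAL) != 1:
        return False
    if script_pubkey_ops[-1] != OP_EVAL:
        return False
    # (c): the serialized script must be the last push of the witness
    if not witness_pushes:
        return False
    # (b): no OP_EVAL in the serialized script or anywhere in the witness,
    # so evaluation cannot recurse and the script stays statically analyzable
    if any(OP_EVAL in item for item in witness_pushes):
        return False
    return True
```

Because the serialized script is always the last witness push and can never contain OP_EVAL itself, a validator can enumerate every opcode that will run before executing anything.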

13. Combo OP codes for more compact scripts, for example:

OP_MERKLEHASH160, if executed, is equivalent to OP_SWAP OP_IF OP_SWAP
OP_ENDIF OP_CAT OP_HASH160 [3]. Allowing more compact tree-signature and
MAST scripts.

OP_DUPTOALTSTACK, OP_DUPFROMALTSTACK: copy to / from alt stack without
removing the item
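As a sanity check on the OP_MERKLEHASH160 expansion above, here is a Python sketch comparing a direct model of the combo opcode against the literal opcode sequence. The assumed entry stack layout (bottom to top: current hash, direction flag, sibling hash) is my reading of [3], and sha256 stands in for OP_HASH160 so the sketch runs anywhere:

```python
import hashlib

def hash_op(data: bytes) -> bytes:
    # stand-in for OP_HASH160 (sha256 used so the sketch runs anywhere)
    return hashlib.sha256(data).digest()

def op_merklehash160(stack):
    """Direct model of OP_SWAP OP_IF OP_SWAP OP_ENDIF OP_CAT OP_HASH160.
    Entry stack, bottom -> top: [..., current, flag, sibling].
    The flag chooses concatenation order, then the pair is hashed."""
    sibling = stack.pop()
    flag = stack.pop()
    current = stack.pop()
    pair = sibling + current if flag else current + sibling
    stack.append(hash_op(pair))

def expanded(stack):
    """The same result via the literal opcode sequence, for comparison."""
    stack[-1], stack[-2] = stack[-2], stack[-1]          # OP_SWAP
    if stack.pop():                                      # OP_IF
        stack[-1], stack[-2] = stack[-2], stack[-1]      # OP_SWAP
    x2 = stack.pop(); x1 = stack.pop()                   # OP_CAT
    stack.append(hash_op(x1 + x2))                       # OP_HASH160
```

Running both on the same input stack yields the same single-hash result, with the flag selecting which side of the concatenation the current node takes.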

14. UTXO commitment: good, but not in the near future

15. Starting as a softfork, moving to a hardfork? The SW softfork is a
quick but dirty solution. I believe a hardfork is unavoidable in the
future, as the 1MB limit has to be increased someday. If we plan ahead,
we could have a much cleaner SW hardfork later, with the code
pre-announced for 2 years.


[1] https://bitcointalk.org/index.php?topic=181734.0
[2]
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-November/011679.html
[3] https://blockstream.com/2015/08/24/treesignatures/
[4] https://bitcointalk.org/index.php?topic=1106586.0
[5]
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-September/010977.html
[6] https://bitcointalk.org/index.php?topic=58579.msg690093#msg690093
Gregory Maxwell via bitcoin-dev
2015-12-10 08:26:04 UTC
On Thu, Dec 10, 2015 at 6:47 AM, jl2012--- via bitcoin-dev wrote:
Post by jl2012--- via bitcoin-dev
It seems the current consensus is to implement Segregated Witness. SW opens
many new possibilities but we need a balance between new features and
deployment time frame.
Post by jl2012--- via bitcoin-dev
2. Deployment time frame: I prefer as soon as possible, even if none of the following new features are implemented.
Thanks, I agree there.

A point to keep in mind: Segregated Witness was specifically designed
to make script changes / improvements / additions / total rewrites no
harder to do _after_ SW than they would be to do along with it. For
many people the "ah ha! let's do this" moment was realizing it could be
a pretty clean soft-fork. For me, it was realizing that we could
structure Segwit in a way that radically simplifies future script
updates ... and in doing so avoid getting trapped by a rush to put in
every script update someone wants.

This is achieved by having the 'version' byte(s) at the start of the
witness program. If the witness program prefix is unrecognized it
means RETURN TRUE. This recaptures the behavior that seems to have
been intended by OP_VER in the earliest versions of the software, but
actually works instead of giving every user the power to hardfork the
system at any time. :) This escapes much of the risk in script
changes, as we no longer need to worry about negation, or other
interactions potentially breaking things. A new version flag can have
its whole design crafted as if it were being created on a clean slate.
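The dispatch described here can be sketched in a few lines of Python (the version-0 validator below is a toy stand-in for real script evaluation):

```python
KNOWN_VERSIONS = {0}  # script versions this node knows how to validate

def run_version_0_script(program: bytes, witness) -> bool:
    # Toy stand-in for real version-0 script evaluation.
    return len(witness) > 0 and all(witness)

def verify_witness_program(version: int, program: bytes, witness) -> bool:
    """An unrecognized version byte means RETURN TRUE: old nodes accept
    the spend, so stricter validation rules for that version can later
    be added as a soft fork. (Contrast OP_VER, where unknown versions
    made nodes disagree and could split the chain.)"""
    if version not in KNOWN_VERSIONS:
        return True  # anyone-can-spend to old nodes; new rules soft-forkable
    return run_version_0_script(program, witness)
```

A node that later learns the rules for version 1 simply adds it to the known set; nodes that never upgrade keep accepting, so the tightening is a soft fork by construction.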

Optimizing layout and such I think makes sense, but I think we should
consider any script enhancements completely off the table for SW;
otherwise the binding will delay deployment and increase complexity. I
want most of those things too (a couple I disagree with) and a few of
them we could do quite quickly-- but no need to bind them up; post SW
and esp with version bits we could deploy them quite rapidly and on
their own timeframes.
Post by jl2012--- via bitcoin-dev
Multiplication and division may still be considered risky and not very useful?
Operations like these make sense with fixed-width types; when they are
over arbitrary bignums, they're a complexity nightmare... as
demonstrated by Bitcoin. :)


RE: OP_DUPTOALTSTACK yea, I've wanted that several times (really I've
been sad that there isn't just a stack flag on every manipulation
instruction).
Bryan Bishop via bitcoin-dev
2015-12-10 08:28:49 UTC
Post by jl2012--- via bitcoin-dev
3. SIGHASH_WITHINPUTVALUE [1]: there are many SIGHASH proposals but this one
has the highest priority as it makes offline signing much easier.
nHashType proposal:
https://github.com/scmorse/bitcoin-misc/blob/master/sighash_proposal.md

OP_CODESEPARATOR:
http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-April/007802.html

summary email about sighash type proposals (which IIRC you saw, so
leaving this link here is mainly for the benefit of others):
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-August/010759.html

- Bryan
http://heybryan.org/
1 512 203 0507
Gregory Maxwell via bitcoin-dev
2015-12-10 09:51:23 UTC
On Thu, Dec 10, 2015 at 6:47 AM, jl2012--- via bitcoin-dev wrote:
Post by jl2012--- via bitcoin-dev
4. Sum of fee, sigopcount, size etc as part of the witness hash tree: for
compact proof of violations in these parameters.
I should have also commented on this: the block can indicate how many
sum criteria there are; and then additional ones could be soft-forked
in. Haven't tried implementing it yet, but there you go. :)
jl2012--- via bitcoin-dev
2015-12-13 15:25:10 UTC
I'm trying to list the minimal consensus rule changes needed for segwit
softfork. The list does not cover the changes in non-consensus critical
behaviors, such as relay of witness data.

1. OP_NOP4 is renamed as OP_SEGWIT
2. A script with OP_SEGWIT must fail if the scriptSig is not completely
empty
3. If OP_SEGWIT is used in the scriptPubKey, it must be the only and the
last OP code in the scriptPubKey, or the script must fail
4. OP_SEGWIT must be preceded by exactly one data push (the
"serialized script") of at least one byte, or the script must fail
5. The most significant byte of the serialized script is the version
byte, an unsigned number
6. If the version byte is 0x00, the script must fail
7. If the version byte is 0x02 to 0xff, the rest of the serialized
script is ignored and the output is spendable with any form of witness
(even if the witness contains something invalid in the current script
system, e.g. OP_RETURN)
8. If the version byte is 0x01,
8a. the rest of the serialized script is deserialized and interpreted as
the scriptPubKey.
8b. the witness is interpreted as the scriptSig.
8c. the script runs under the existing rules (including P2SH)
9. If the script fails when OP_SEGWIT is interpreted as a NOP, the
script must fail. However, this is guaranteed by rules 2, 3, 4 and 6, so
no additional check is needed.
10. The calculation of Merkle root in the block header remains unchanged
11. The witness commitment is placed somewhere in the block, either in
coinbase or an OP_RETURN output in a specific tx
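Rules 2-6 amount to a small amount of parsing. A rough Python sketch (the opcode value 0xb3 is OP_NOP4's existing assignment; the push parsing is simplified to single-byte direct pushes, which is an assumption of this sketch, not the proposal):

```python
OP_SEGWIT = 0xb3  # OP_NOP4's opcode value, renamed per rule 1

def check_segwit_output(script_pubkey: bytes, script_sig: bytes):
    """Sketch of rules 2-6 above. Returns (version_byte, remainder of
    the serialized script) or raises ValueError on a rule violation.
    Only direct pushes of 1-75 bytes are handled here."""
    if script_sig != b"":
        raise ValueError("rule 2: scriptSig must be completely empty")
    if len(script_pubkey) < 3 or script_pubkey[-1] != OP_SEGWIT:
        raise ValueError("rule 3: OP_SEGWIT must be the last opcode")
    push_len = script_pubkey[0]
    if not (1 <= push_len <= 75) or len(script_pubkey) != 2 + push_len:
        raise ValueError("rule 4: exactly one data push before OP_SEGWIT")
    serialized = script_pubkey[1:1 + push_len]
    version = serialized[0]
    if version == 0x00:
        raise ValueError("rule 6: version byte 0x00 must fail")
    return version, serialized[1:]
```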

Format of the witness commitment:
The witness commitment could be as simple as a hash tree of all
witnesses in a block. However, this is bad for the further development
of a sum tree for compact SPV fraud proofs, as we would have to maintain
one more tree in the future. Even if we are not going to implement any
sum checking in the first version of segwit, it's better to make it
easier for future softforks.
(credit: gmaxwell)
12. The block should indicate how many sum criteria there are by
committing the number following the witness commitment
13. The witness commitment is a hash-sum tree with the number of sum
criteria committed in rule 12
14. Each sum criterion is a fixed 8-byte signed number (negative values
are allowed for uses like counting delta-UTXOs. 8 bytes are needed for
the fee commitment. Multiple smaller criteria may share an 8-byte slot,
as long as they do not allow negative values)
15. Nodes will ignore the sum criteria that they do not understand, as
long as the sum is correctly calculated
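A minimal sketch of such a hash-sum tree, with a single criterion (e.g. fee) per leaf. Note one design point the sketch makes explicit: an odd level is padded with a zero-valued leaf rather than duplicating the last node (as Bitcoin's transaction tree does), because duplication would double-count the duplicated value in the sum. The encoding below is illustrative, not a proposed serialization:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def sum_tree_root(leaves):
    """Sketch of a hash-sum tree: each leaf is (witness_hash, value),
    where value is an 8-byte signed criterion such as the fee. Each
    inner node commits to the sum of its children, so one branch can
    compactly prove a leaf's contribution to the block-level total."""
    nodes = [(h(wh), v) for wh, v in leaves]
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append((h(b""), 0))  # zero-valued pad, never duplicate
        nxt = []
        for (h1, v1), (h2, v2) in zip(nodes[::2], nodes[1::2]):
            s = v1 + v2
            nxt.append((h(h1 + h2 + s.to_bytes(8, "little", signed=True)), s))
        nodes = nxt
    return nodes[0]  # (root_hash, committed total)
```

A fraud proof for an over-claimed total is then a single mismatching branch, rather than the full witness set.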

Size limit:
16. We need to determine the size limit of witness
17. We also need an upper limit for the number of sum criteria, or a
malicious miner may find a block with many unknown sum criteria and feed
an unlimited amount of garbage to other nodes.

All other functions I mentioned in my wish list could be softforked into
this system later.

To mitigate the risk described in rule 17, we may ask miners to vote for
an increase (but not a decrease) in the number of sum criteria.
Initially there should be 0 sum criteria. If we would like to introduce
a new criterion, miners will support the proposal by setting the number
of sum criteria to 1. However, until 95% of miners support the increase,
all values of the extra sum criterion must be 0. Therefore, even if a
malicious miner declares any number of sum criteria, those criteria
unknown to the network must be 0, and no extra data is needed to
construct the block. This voting mechanism could be a softfork over
rules 12 and 13, and is not necessary in the initial segwit deployment.
Post by Gregory Maxwell via bitcoin-dev
On Thu, Dec 10, 2015 at 6:47 AM, jl2012--- via bitcoin-dev wrote:
Post by jl2012--- via bitcoin-dev
4. Sum of fee, sigopcount, size etc as part of the witness hash tree: for
compact proof of violations in these parameters.
I should have also commented on this: the block can indicate how many
sum criteria there are; and then additional ones could be soft-forked
in. Haven't tried implementing it yet, but there you go. :)
Pieter Wuille via bitcoin-dev
2015-12-13 18:07:01 UTC
On Sun, Dec 13, 2015 at 4:25 PM, jl2012--- via bitcoin-dev wrote:
Post by jl2012--- via bitcoin-dev
I'm trying to list the minimal consensus rule changes needed for segwit
softfork. The list does not cover the changes in non-consensus critical
behaviors, such as relay of witness data.
1. OP_NOP4 is renamed as OP_SEGWIT
2. A script with OP_SEGWIT must fail if the scriptSig is not completely
empty
3. If OP_SEGWIT is used in the scriptPubKey, it must be the only and the
last OP code in the scriptPubKey, or the script must fail
4. The OP_SEGWIT must be preceded by exactly one data push (the "serialized
script") with at least one byte, or the script must fail
The use of a NOP opcode to indicate a witness script was something I
considered at first too, but it's not really needed. You wouldn't be
able to use that opcode in any place a normal opcode could occur, as
it needs to be able to inspect the full scriptSig (rather than just
its resulting stack) anyway. So both in practice and conceptually it
is only really working as a template that gets assigned a special
meaning (like P2SH did). We don't need an opcode for that, and instead
we could say that any scriptPubKey (or redeemscript) that consists of
a single push is a witness program.
Post by jl2012--- via bitcoin-dev
5. The most significant byte of serialized script is the version byte, an
unsigned number
6. If the version byte is 0x00, the script must fail
What is that good for?
Post by jl2012--- via bitcoin-dev
7. If the version byte is 0x02 to 0xff, the rest of the serialized script is
ignored and the output is spendable with any form of witness (even if the
witness contains something invalid in the current script system, e.g.
OP_RETURN)
Do you mean the scriptPubKey itself, or the script that follows after
the version byte?
* The scriptPubKey itself: that's in contradiction with your rule 4,
as segwit scripts are by definition only a push (+ opcode), so they
can't be an OP_RETURN.
* The script after the version byte: agree - though it doesn't
actually need to be a script at all even (see further).
Post by jl2012--- via bitcoin-dev
8. If the version byte is 0x01,
8a. the rest of the serialized script is deserialized and interpreted as the
scriptPubKey.
8b. the witness is interpreted as the scriptSig.
8c. the script runs with existing rules (including P2SH)
I don't think it's very useful to allow P2SH inside segwit, as we can
actually do better and allow segwit scripts to push the (perhaps 256
bit) hash of the redeemscript in the scriptPubKey, and have the full
redeemscript in the witness. See further for details. The numbers I
showed in the presentation were created using a simulation that used
that model already.

It is useful however to allow segwit inside P2SH (so the witness
program including version byte goes into the redeemscript, inside the
scriptSig). This allows old clients to send to new wallets without any
modifications (at slightly lower efficiency). The rules in this case
must say that the scriptSig is exactly a push of the redeemscript
(which itself contains the witness program), to provide both
compatibility with old consensus rules and malleability protection.

So let me summarize by giving an equivalent to your list above,
reflecting how my current prototype works:
A) A scriptPubKey or P2SH redeemscript that consists of a single push
of 2 to 41 bytes gets a new special meaning, and the byte vector
pushed by it is called the witness program.
A.1) In case the scriptPubKey pushes a witness program directly, the
scriptSig must be exactly empty.
A.2) In case the redeemscript pushes a witness program, the scriptSig
must be exactly the single push of the redeemscript.
B) The first byte of a witness program is the version byte.
B.1) If the witness version byte is 0, the rest of the witness program
is the actual script, which is executed after normal script evaluation
but with data from the witness rather than the scriptSig. The program
must not fail, and must result in a single TRUE on the stack and nothing
else (to prevent stuffing the witness with pointless data during relay
of transactions).
B.2) if the witness version byte is 1, the rest of the witness program
must be 32 bytes, and a SHA256 hash of the actual script. The witness
must consist of an input stack to feed to the program, followed by the
serialized program itself (whose hash must match the hash pushed in
the witness program). It is executed after normal script evaluation,
and must not fail, and must result in a single TRUE on the stack and
nothing else.
B.3) if the witness version byte is 2 or higher, no further
interpretation of the data happens, but can be softforked in.
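A rough Python sketch of rules B.1-B.3, with a toy stand-in for actual script evaluation (the `run_script` function below is hypothetical and only models the "single TRUE, nothing else" success condition):

```python
import hashlib

def run_script(script: bytes, stack):
    """Toy stand-in for script evaluation: returns the final stack.
    Here it 'succeeds' iff exactly one truthy item was supplied."""
    return [True] if len(stack) == 1 and stack[0] else [False]

def validate_witness_program(witness_program: bytes, witness) -> bool:
    version, rest = witness_program[0], witness_program[1:]
    if version == 0:
        # B.1: the rest of the witness program is the script itself;
        # the witness supplies its input stack
        return run_script(rest, list(witness)) == [True]
    if version == 1:
        # B.2: the rest is the 32-byte SHA256 of the actual script,
        # which rides as the last witness item after its input stack
        if len(rest) != 32 or not witness:
            return False
        *stack, script = witness
        if hashlib.sha256(script).digest() != rest:
            return False
        return run_script(script, stack) == [True]
    return True  # B.3: higher versions are valid, reserved for soft forks
```

Version 1 here is what replaces P2SH inside segwit: the scriptPubKey carries only a 256-bit hash, and the full script travels in the witness.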

I'll write a separate mail on the block commitment structure.
--
Pieter
jl2012--- via bitcoin-dev
2015-12-13 18:41:44 UTC
Post by Pieter Wuille via bitcoin-dev
The use of a NOP opcode to indicate a witness script was something I
considered at first too, but it's not really needed. You wouldn't be
able to use that opcode in any place a normal opcode could occur, as
it needs to be able to inspect the full scriptSig (rather than just
its resulting stack) anyway. So both in practice and conceptually it
is only really working as a template that gets assigned a special
meaning (like P2SH did). We don't need an opcode for that, and instead
we could say that any scriptPubKey (or redeemscript) that consists of
a single push is a witness program.
Post by jl2012--- via bitcoin-dev
5. The most significant byte of serialized script is the version byte, an
unsigned number
6. If the version byte is 0x00, the script must fail
What is that good for?
Just to make sure a script like OP_0 OP_SEGWIT will fail.

Anyway, your design may be better so forget it
Post by Pieter Wuille via bitcoin-dev
Post by jl2012--- via bitcoin-dev
7. If the version byte is 0x02 to 0xff, the rest of the serialized script is
ignored and the output is spendable with any form of witness (even if the
witness contains something invalid in the current script system, e.g.
OP_RETURN)
Do you mean the scriptPubKey itself, or the script that follows after
the version byte?
* The scriptPubKey itself: that's in contradiction with your rule 4,
as segwit scripts are by definition only a push (+ opcode), so they
can't be an OP_RETURN.
* The script after the version byte: agree - though it doesn't
actually need to be a script at all even (see further).
I am not referring to the serialized script, but the witness. Basically,
it doesn't care what the content looks like.
Post by Pieter Wuille via bitcoin-dev
It is useful however to allow segwit inside P2SH
Agree
Post by Pieter Wuille via bitcoin-dev
So let me summarize by giving an equivalent to your list above,
A) A scriptPubKey or P2SH redeemscript that consists of a single push
of 2 to 41 bytes gets a new special meaning, and the byte vector
pushed by it is called the witness program.
Why 41 bytes? Do you expect all witness programs to be P2SH-like?
Post by Pieter Wuille via bitcoin-dev
The program
must not fail and result in a single TRUE on the stack, and nothing
else (to prevent stuffing the witness with pointless data during relay
of transactions).
Could we just implement this as a standardness rule? It is always
possible to stuff the scriptSig with pointless data, so I don't think
it's a new attack vector. What if we want to include the height and tx
index of the input for compact fraud proofs? Such fraud proofs should
not be an opt-in function and should not depend on the version byte.

For the same reason, we should also allow traditional txs to have data
in the witness field, for any potential softfork upgrade.
Tamas Blummer via bitcoin-dev
2015-12-10 12:54:30 UTC
Note that the unused space in the coinbase input script allows us to
soft-fork an additional SW Merkle tree root into the design, so please
make sure the new SW data structure also has a slot for future extension.

Tamas Blummer
Jannes Faber via bitcoin-dev
2015-12-12 00:43:11 UTC
Segregated IBLT

I was just wondering if it would make sense, when we have SW, to also
make a Segregated IBLT: segregating transactions from signatures, and
then tuning the parameters such that transactions have a slightly higher
guarantee while saving a bit of space on the signatures side.

IBLT should of course, most of the time, convey all transactions _and_ all
signatures. However, in suboptimal situations, at least the receiving miner
will be more likely to have all the transactions, just possibly not all the
signatures.

Assuming the miner was already planning on SPV mining anyway, at least now
she knows which transactions to remove from her mempool, taking away an
excuse to mine an empty block. And she can still verify most of the
signatures too (whatever % could be recovered from the IBLT).

I guess this does not improve the worst adversarial case for IBLT block
propagation, but it should improve the effectiveness in cases where the
"normal" IBLT would fail to deliver all transactions. Transactions without
signatures is better than no transactions at all, for a miner that's eager
to start on the next block, right? In "optimal" cases it would reduce the
size of the IBLT.

Sorry if this was already suggested.


--
Jannes

On 10 December 2015 at 13:54, Tamas Blummer via bitcoin-dev wrote:
Post by Tamas Blummer via bitcoin-dev
Note that the unused space in coin base input script allows us to
soft-fork an additional SW Merkle tree root into the design,
therefore please make sure the new SW data structure also has a new slot
for future extension.
Tamas Blummer
_______________________________________________
bitcoin-dev mailing list
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Rusty Russell via bitcoin-dev
2015-12-13 20:34:48 UTC
Post by Jannes Faber via bitcoin-dev
Segregated IBLT
I was just wondering if it would make sense when we have SW to also make
Segregated IBLT? Segregating transactions from signatures and then tune the
parameters such that transactions have a slightly higher guarantee and save
a bit of space on the signatures side.
It just falls out naturally. If the peer doesn't want the witnesses,
they don't get serialized into the IBLT.

Cheers,
Rusty.
Jonathan Toomim via bitcoin-dev
2015-12-14 11:44:34 UTC
1. I think we should limit the sum of the block and witness data to nBlockMaxSize*7/4 per block, for a maximum of 1.75 MB total. I don't like the idea that SegWit would give us 1.75 MB of capacity in the typical case while requiring hardware capable of handling 4 MB in adversarial conditions (i.e. intentional multisig). I think a limit on the segwit size allays that concern.

2. I think that segwit is a substantial change to how Bitcoin works, and I very strongly believe that we should not rush this. It changes the block structure, it changes the transaction structure, it changes the network protocol, it changes SPV wallet software, it changes block explorers, and it has changes that affect most other parts of the Bitcoin ecosystem. After we decide to implement it, and have a final version of the code that will be merged, we should give developers of other Bitcoin software time to implement code that supports the new transaction/witness formats.

When you guys say "as soon as possible," what do you mean exactly?
Post by jl2012--- via bitcoin-dev
1-2 are about scalability and have highest priority
1. Witness size limit: with SW we should allow a bigger overall block size. It seems 2MB is considered to be safe for many people. However, the exact size and growth of block size should be determined based on testing and reasonable projection.
2. Deployment time frame: I prefer as soon as possible, even if none of the following new features are implemented. This is not only a technical issue but also a response to the community which has been waiting for a scaling solution for years
Adam Back via bitcoin-dev
2015-12-14 12:32:01 UTC
I think people have a variety of sequences in mind, eg some would
prefer to deploy versionBits first, so that subsequent features can be
deployed in parallel (currently soft-fork deployment is necessarily
serialised).

Others think we should do seg-witness first because scale is more important.

Some want to do seg-witness as a hard-fork (personally I think that
would take a bit longer to deploy & activate - the advantage of
soft-fork is that it's lower risk and we have more experience of it).

I've seen a few people want to do BIP 102 first (a straight move to 2MB
and only that) and then do seg-witness and other scaling work later.
That's possible also. Before Luke observed that you could do a
seg-witness-based block-size increase via soft-fork, people had been
working, following the loose plan of action from the Montreal workshop
discussion summary posted on this list, on something like BIP 102 in
the 2-4-8 kind of space, plus validation cost accounting.

So I think personally soft-fork seg-witness first, but hey, I'm not
writing code at present and I'm very happy for wiser and more code- and
deployment-detail-aware minds to figure out the best deployment
strategy; I wouldn't mind any of the above, I just think the
seg-witness soft-fork is the safest and fastest. On the complexity
risk: well, on the plus side it is implemented and it reduces
deployment risk, and it's anyway needed work to have a robust
malleability fix, which is needed for a whole range of bitcoin
smart-contract and scaling features, including for example
GreenAddress-like faster transactions as used by BitPay, BitGo and
GreenAddress, as well as lightning-related proposals and basically any
smart-contract use that involves multiple clauses and dependent
transactions. Also on complexity risk, Greg has highlighted that the
complexity and overhead difference is really minor. As for the knock-on
code changes needed: a bunch of the next steps for Bitcoin are going to
need code changes, and I think our scope to improve and scale Bitcoin
is going to be severely hampered if we restrict ourselves with the
pre-condition that we can't make protocol improvements. I think people
in core would be happy, as they have done multiple times in the past,
to help people for free on volunteer time to integrate and fix up
related code in various languages and in FOSS and commercial software
that is affected.

As to the time-line, I saw Luke guesstimate March 2016 on reddit.
Others may be more or less conservative. Probably a BIP and testing are
the main things, and that can be helped by more eyes. The one advantage
of a BIP 102-like proposal is simplicity, if one had to do a more
emergency hard-fork. Maybe companies and power users, miners and pool
operators could help define some requirements and metrics about an
industry-wide service level they are receiving relative to fees.

The other thing which is not protocol related, is that companies can
help themselves and help Bitcoin developers help them, by working to
improve decentralisation with better configurations, more use of
self-hosted and secured full nodes, and decentralisation of policy
control over hashrate. That might even include buying a nominal (to a
reasonably funded startup) amount of mining equipment. Or for power
users to do more of that. Some developers are doing mining.
Blockstream and some employees have a little bit of hashrate. If we
could define some metrics and best practices and measure the
improvements, that would maybe reduce miners' concerns about
centralisation risk and allow a bigger block faster, alongside the
IBLT & weak-block network protocol improvements.

Adam

On 14 December 2015 at 19:44, Jonathan Toomim via bitcoin-dev
Post by Jonathan Toomim via bitcoin-dev
1. I think we should limit the sum of the block and witness data to nBlockMaxSize*7/4 per block, for a maximum of 1.75 MB total. I don't like the idea that SegWit would give us 1.75 MB of capacity in the typical case, but we have to have hardware capable of 4 MB in adversarial conditions (i.e. intentional multisig). I think a limit to the segwit size allays that concern.
2. I think that segwit is a substantial change to how Bitcoin works, and I very strongly believe that we should not rush this. It changes the block structure, it changes the transaction structure, it changes the network protocol, it changes SPV wallet software, it changes block explorers, and it has changes that affect most other parts of the Bitcoin ecosystem. After we decide to implement it, and have a final version of the code that will be merged, we should give developers of other Bitcoin software time to implement code that supports the new transaction/witness formats.
When you guys say "as soon as possible," what do you mean exactly?
Post by jl2012--- via bitcoin-dev
1-2 are about scalability and have highest priority
1. Witness size limit: with SW we should allow a bigger overall block size. It seems 2MB is considered to be safe for many people. However, the exact size and growth of block size should be determined based on testing and reasonable projection.
2. Deployment time frame: I prefer as soon as possible, even if none of the following new features are implemented. This is not only a technical issue but also a response to the community which has been waiting for a scaling solution for years
Jonathan Toomim via bitcoin-dev
2015-12-14 12:50:51 UTC
Off-topic: If you want to decentralize hashing, the best solution is probably to redesign p2pool to use DAGs. p2pool would be great except for the fact that the 30 sec share times are (a) long enough to cause significant reward variance for miners, but (b) short enough to cause hashrate loss from frequent switching on hardware that wasn't designed for it (e.g. Antminers, KNC) and (c) uneven rewards to different miners due to share orphan rates. DAGs can fix all of those issues. I had a talk with some medium-sized Chinese miners on Thursday in which I told them about p2pool, and I got the impression that they would prefer it over their existing pools due to the 0% fees and trustless design if the performance issues were fixed. If anybody is interested in helping with this work, ping me or Bob McElrath backchannel to be included in our conversation.
Post by Adam Back via bitcoin-dev
The other thing which is not protocol related, is that companies can
help themselves and help Bitcoin developers help them, by working to
improve decentralisation with better configurations, more use of
self-hosted and secured full nodes, and decentralisation of policy
control over hashrate. That might even include buying a nominal (to a
reasonably funded startup) amount of mining equipment. Or for power
users to do more of that. Some developers are doing mining.
Blockstream and some employees have a little bit of hashrate. If we
could define some metrics and best practices and measure the
improvements, that would maybe reduce miners concerns about
centralisation risk and allow a bigger block faster, alongside the
IBLT & weak block network protocol improvements.