Discussion:
[bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)
t. khan via bitcoin-dev
2016-12-05 15:27:32 UTC
BIP Proposal - Managing Bitcoin’s block size the same way we do difficulty
(aka Block75)

The every two-week adjustment of difficulty has proven to be a reasonably
effective and predictable way of managing how quickly blocks are mined.
Bitcoin needs a reasonably effective and predictable way of managing the
maximum block size.

It’s clear at this point that human beings should not be involved in the
determination of max block size, just as they’re not involved in deciding
the difficulty.

Instead of setting an arbitrary max block size (1MB, 2MB, 8MB, etc.) or
passing the decision to miners/pool operators, the max block size should be
adjusted every two weeks (2016 blocks) using a system similar to how
difficulty is calculated.

Put another way: let’s stop thinking about what the max block size should
be and start thinking about how full we want the average block to be
regardless of size. Over the last year, we’ve had averages of 75% or
higher, so aiming for 75% full seems reasonable, hence naming this concept
‘Block75’.

The target capacity over 2016 blocks would be 75%. If the last 2016 blocks
are more than 75% full, add the difference to the max block size. Like this:

MAX_BLOCK_BASE_SIZE = 1000000
TARGET_CAPACITY = 750000
AVERAGE_OVER_CAP = average block size of last 2016 blocks minus
TARGET_CAPACITY

To check if a block is valid, ≤ (MAX_BLOCK_BASE_SIZE + AVERAGE_OVER_CAP)
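
A minimal sketch of the rule in code (Python here; the names mirror the
pseudocode above and are illustrative, not actual Bitcoin Core code):

MAX_BLOCK_BASE_SIZE = 1000000  # permanent floor, in bytes
TARGET_CAPACITY = 750000       # 75% of the base size

def next_max_block_size(avg_block_size):
    # Recompute the limit from the average size of the last 2016 blocks.
    # The 1,000,000-byte base acts as the minimum max block size.
    return max(MAX_BLOCK_BASE_SIZE,
               MAX_BLOCK_BASE_SIZE + (avg_block_size - TARGET_CAPACITY))

def block_is_valid(block_size, avg_block_size):
    return block_size <= next_max_block_size(avg_block_size)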

For example, if the last 2016 blocks are 85% full (average block is 850
KB), add 10% to the max block size. The new max block size would be 1,100
KB until the next 2016 blocks are mined, then reset and recalculate. The
1,000,000 byte limit that exists currently would remain, but would
effectively be the minimum max block size.

Another two weeks goes by, the last 2016 blocks are again 85% full, but now
that means they average 935 KB out of the 1,100 KB max block size. This is
93.5% of the 1,000,000 byte limit, so 18.5% would be added to that to make
the new max block size of 1,185 KB.

Another two weeks passes. This time, the average block is 1,050 KB. The new
max block size is calculated to be 1,300 KB (as blocks were 105% full, minus
the 75% capacity target, so 30% is added to the max block size).

Repeat every 2016 blocks, forever.
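
Running the sketch above over the three example periods reproduces these
numbers:

for avg in (850000, 935000, 1050000):
    print(avg, '->', next_max_block_size(avg))
# 850000 -> 1100000, 935000 -> 1185000, 1050000 -> 1300000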

If Block75 had been applied at the difficulty adjustment on November 18th,
the max block size would have been 1,080 KB, as the average block during
that period was 83% full, so 8% is added to the 1,000 KB limit. The current
size, after the December 2nd adjustment, would be 1,150 KB.

Block75 would allow the max block size to grow (or shrink) in response to
transaction volume, and it does so predictably, reasonably quickly, and in a
way that prevents wild swings in block size or transaction fees. It
attempts to keep blocks at 75% total capacity over each two-week period,
the same way difficulty tries to keep blocks mined every ten minutes. It
also keeps blocks as small as possible.

Thoughts?

-t.k.
s7r via bitcoin-dev
2016-12-10 10:44:31 UTC
Post by t. khan via bitcoin-dev
BIP Proposal - Managing Bitcoin’s block size the same way we do
difficulty (aka Block75)
[snip - full proposal quoted above]
I like the idea. It is good wrt growing the max block size
automatically without human action, but the main problem (or question)
is not how to grow this number; it is what number the network can
handle, considering both miners and users. While disk space requirements
might not be a big problem, block propagation time is. The time required
for a block to propagate through the network (or at least to all the miners)
is directly dependent on its size. If blocks take too much time to
propagate through the network, the orphan rate will increase in unpredictable
ways. For example, if the internet speed in China is worse than in
Europe, and miners in China have more than 50% of the hashing power,
blocks mined by European miners might get orphaned.

The system as described can also be gamed by filling the network with
transactions. Miners have a monetary interest to include as many
transactions as possible in a block in order to collect the fees.
Regardless of how you think about it, there has to be a maximum block size
that the network will allow as a consensus rule. Increasing it
dynamically based on transaction volume will eventually reach a point where
the number gets big enough that it breaks things. Bitcoin, because of its
fundamental design, can scale by using off-chain solutions.
Hampus Sjöberg via bitcoin-dev
2016-12-10 12:05:22 UTC
While disk space requirements might not be a big problem, block
propagation time is

Is block propagation time really still a problem? Compact blocks and FIBRE
should help here.
Bitcoin, because of its fundamental design, can scale by using off-chain
solutions.

I agree.
However, I believe that on-chain scaling will be needed regardless of which
off-chain solution gains popularity.

2016-12-10 11:44 GMT+01:00 s7r via bitcoin-dev:
[snip - full quote of s7r's message above]
t. khan via bitcoin-dev
2016-12-11 00:26:01 UTC
Miners 'gaming' the Block75 system -
There is no financial incentive for miners to attempt to game the Block75
system. Even if it were attempted and assuming the goal was to create
bigger blocks, the maximum possible increase would be 25% over the previous
block size. And, that size would only last for two weeks before readjusting
down. It would cost them more in transaction fees to stuff the network than
they could ever make up. To game the system, they'd have to game it forever
with no possibility of profit.

Blocks would get too big -
Eventually, blocks would get too big, but only if bandwidth stopped
increasing and the cost of disk space stopped decreasing. Otherwise, the
incremental adjustments made by Block75 (especially in combination with
SegWit) wouldn't break anyone's connection or result in significantly more
orphaned blocks.

The frequent and small adjustments made by Block75 have the added benefit
of being more easily adapted to by miners and node operators, both
psychologically and technologically.

-t.k

On Sat, Dec 10, 2016 at 5:44 AM, s7r via bitcoin-dev:
[snip - full quote of s7r's message above]
James Hilliard via bitcoin-dev
2016-12-11 00:40:25 UTC
Miners in general are naturally incentivized to always mine max size
blocks to maximize transaction fees, simply because there is very
little marginal cost to including extra transactions (there will always
be a transaction backlog of some sort available to mine, since demand
for block space is effectively unbounded as fees approach 0, and miners
can even mine their own transactions without any fees). This proposal
would almost certainly cause runaway block size growth and encourage
much more miner centralization.

On Sat, Dec 10, 2016 at 6:26 PM, t. khan via bitcoin-dev:
[snip - full quote of t. khan's message above]
Bram Cohen via bitcoin-dev
2016-12-11 01:07:06 UTC
Miners individually have an incentive to include every transaction they can
when they mine a block, but they also sometimes have an incentive to
collectively cooperate to reduce throughput and make more money as a group.
Under schemes where limits can be adjusted, both possibilities must be taken
into account.

On Sat, Dec 10, 2016 at 4:40 PM, James Hilliard via bitcoin-dev:
[snip - full quote of James Hilliard's message above]
s7r via bitcoin-dev
2016-12-11 17:11:10 UTC
Post by t. khan via bitcoin-dev
Miners 'gaming' the Block75 system -
There is no financial incentive for miners to attempt to game the
Block75 system. Even if it were attempted and assuming the goal was to
create bigger blocks, the maximum possible increase would be 25% over
the previous block size. And, that size would only last for two weeks
before readjusting down. It would cost them more in transaction fees to
stuff the network than they could ever make up. To game the system,
they'd have to game it forever with no possibility of profit.
This is an incentive if a few miners agree to create a large conglomerate
that will ultimately control the network.

You miss something obvious that makes this attack actually free of cost.
Nothing will "cost them more in transaction fees". A miner can create
thousands of transactions paying to himself, and not broadcast them to
the network, but hold them and include them in the blocks he mines. The
fees are collected by him because the transactions are included in a block
that he mined, and the remaining amount is in another wallet of the same
person. Repeat this continuously to fill blocks.
Post by t. khan via bitcoin-dev
Blocks would get too big -
Eventually, blocks would get too big, but only if bandwidth stopped
increasing and the cost of disk space stopped decreasing. Otherwise, the
incremental adjustments made by Block75 (especially in combination with
SegWit) wouldn't break anyone's connection or result in significantly
more orphaned blocks.
Topology and bandwidth speed / hash rate of the network cannot be
controlled; if we make assumptions about these, it might have terrible
consequences.

Even if we take into consideration that bandwidth will only grow and disk
space will only cost less (which is not something we can safely assume,
by the way), the hard limit on max block size cannot grow to an unlimited
value (even if the growth happens over time). There is also a validation
cost in time for each block; for the health of the network, any node
should be able to download _and_ validate a block before the next block
gets mined.

You said in another post that a permanent solution is preferred, rather
than kicking the can down the road. I fully agree, as well as many
others reading this list, but the permanent solution doesn't necessarily
have to be increasing the max block size dynamically.

If you think about it the other way around, dynamically growing the max
block size is also kicking the can down the road ... just without having
to touch it and get dust on the boot ;)
t. khan via bitcoin-dev
2016-12-11 19:55:34 UTC
Post by s7r via bitcoin-dev
This is an incentive if a few miners agree to create a large conglomerate
that will ultimately control the network.
You miss something obvious that makes this attack actually free of cost.
Nothing will "cost them more in transaction fees". A miner can create
thousands of transactions paying to himself, and not broadcast them to
the network, but hold them and include them in the blocks he mines. The
fees are collected by him because the transactions are included in a block
that he mined, and the remaining amount is in another wallet of the same
person. Repeat this continuously to fill blocks.
No, that wasn't overlooked. Miners could indeed stuff their own blocks for
free, but they can't stuff blocks mined by others for free.

In the hypothetical scenario where there is a single mining pool which
mines most (if not all) of the blocks, we would have much larger problems
than their ability to raise the max block size gradually. Even if they were
able to fill 100% of the blocks for an entire year, the max block size for
that 2016 block period would be 7.25MB (not accounting for SegWit). After
the whole year they would have made no extra profit vs doing nothing. And
as soon as they stopped this scheme, block size would spring back to its
natural level.

The good news is, this scenario has never happened and even when we've come
remotely close (when ASICs first shipped), the situation was temporary. The
odds of this happening in the future and persisting long enough to have any
major effect with Block75 are very close to zero.
Post by s7r via bitcoin-dev
Topology and bandwidth speed / hash rate of the network cannot be
controlled; if we make assumptions about these, it might have terrible
consequences.
Even if we take into consideration that bandwidth will only grow and disk
space will only cost less (which is not something we can safely assume,
by the way), the hard limit on max block size cannot grow to an unlimited
value (even if the growth happens over time). There is also a validation
cost in time for each block; for the health of the network, any node
should be able to download _and_ validate a block before the next block
gets mined.
You said in another post that a permanent solution is preferred, rather
than kicking the can down the road. I fully agree, as well as many
others reading this list, but the permanent solution doesn't necessarily
have to be increasing the max block size dynamically.
Increasing *and* decreasing max block size dynamically. Block75 is
self-correcting, whereas any solution with hardcoded limits can't correct
without human intervention and would rely on our ability to predict the
future (which, as you pointed out, we can't do). Therefore, any solution
that's not dynamic cannot be permanent.

Additionally, the frequent and gradual changes in max block size would
allow us to see any consequences well in advance (years probably).
Post by s7r via bitcoin-dev
If you think about it the other way around, dynamically growing the max
block size is also kicking the can down the road ... just without having
to touch it and get dust on the boot ;)
Not having to touch it again = permanent solution. ;)

It would be helpful if some others would run the numbers on how Block75
would adjust the block size over time:

new max block size = 1000kb + (average block size over last 2016 blocks -
750kb)
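
As a starting point, a small simulation sketch (Python; the demand curve
below is an illustrative assumption, not part of the proposal):

def block75_series(demand_kb, periods=26):
    max_size = 1000.0  # KB
    series = []
    for p in range(periods):
        avg = min(demand_kb(p), max_size)  # blocks can't exceed the limit
        max_size = max(1000.0, 1000.0 + (avg - 750.0))
        series.append(max_size)
    return series

# Example: demand grows 2% per two-week period, starting at 850 KB.
print(block75_series(lambda p: 850.0 * 1.02 ** p)[:5])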

-t.k.
James Hilliard via bitcoin-dev
2016-12-11 20:31:05 UTC
What's most likely to happen is miners will max out the blocks they
mine simply to try and get as many transaction fees as possible, like
they are doing right now (there will be a backlog of transactions at
any block size). Having the block size double every year would likely
cause major problems, and this proposal seems to allow over a 7x
increase.

The main problem with this proposal, I think, is that users effectively
have no way to stop the miners from increasing the block size
continuously.

On Sun, Dec 11, 2016 at 1:55 PM, t. khan via bitcoin-dev:
[snip - full quote of t. khan's message above]
t. khan via bitcoin-dev
2016-12-11 21:40:21 UTC
Post by James Hilliard via bitcoin-dev
What's most likely to happen is miners will max out the blocks they
mine simply to try and get as many transaction fees as possible, like
they are doing right now (there will be a backlog of transactions at
any block size). Having the block size double every year would likely
cause major problems, and this proposal seems to allow over a 7x
increase.
Block75 is not exponential scaling. It's true the max theoretical increase
in the first year would be 7x, but the next year would be a max of 2x, and
the next could only increase by 50% and so on.
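
A quick worst-case check (every block 100% full at the current limit, using
the formula quoted earlier) shows the growth is linear at 250 KB per period,
which is why the yearly growth factor keeps shrinking:

max_size = 1000.0  # KB
yearly = []
for period in range(1, 79):  # ~3 years of 2016-block periods
    max_size = 1000.0 + (max_size - 750.0)  # full blocks: +250 KB/period
    if period % 26 == 0:     # ~26 periods per year
        yearly.append(max_size)
print(yearly)  # [7500.0, 14000.0, 20500.0] -> ~7.5x, then ~1.9x, then ~1.5x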

However, to reach the max in the first year: 1) ALL blocks would have to be
100% full and 2) transactions would have to increase at the same rate. We'd
have to be doing 2.1 million transactions a day within a year to make that
happen, and would therefore need blocks to be that big.

Realistically, max block size will grow (and shrink) at a much slower rate
... even more so with SegWit.
Post by James Hilliard via bitcoin-dev
The main problem with this proposal, I think, is that users effectively
have no way to stop the miners from increasing the block size
continuously.
Yes, they could, simply by not sending transactions. Users don't care at
all about block size. They just want their transactions to be fast and
relatively cheap.

-t.k.
Bram Cohen via bitcoin-dev
2016-12-11 21:53:46 UTC
On Sun, Dec 11, 2016 at 1:40 PM, t. khan via bitcoin-dev <
Post by t. khan via bitcoin-dev
Block75 is not exponential scaling. It's true the max theoretical increase
in the first year would be 7x, but the next year would be a max of 2x, and
the next could only increase by 50% and so on.
With those limits, there's very little reason not to simply have a fixed
schedule. Blocks are likely to all be full in the future anyway, given a
real fee market. The idea that miners will be held back on block sizes
out of worry about propagation delay is a myth, and even if it were true,
it would strongly favor collective pooling, which would be a very bad thing.
James Hilliard via bitcoin-dev
2016-12-11 21:55:55 UTC
I think the main thing you're missing is that there will always be
transactions available to mine, simply because demand for blockspace is
effectively unbounded as fees approach 0. Nodes generally have a
static mempool size and a dynamic minrelaytxfee nowadays, so as
transactions get mined, lower-fee transactions get accepted into the
mempool. An individual opting not to send a transaction would not make
the blocks smaller, simply because there will always be other
transactions available (it would really only have an effect on the
transaction fees needed to get mined).
Post by t. khan via bitcoin-dev
[snip - full quote of t. khan's message above]
t. khan via bitcoin-dev
2016-12-11 22:30:34 UTC
The assumption you're making is incorrect. There is not an infinite number
of low-fee transactions.

Yes, the average fee will go down compared to today with Block75, but this
will balance itself between demand and the minimum fee miners are willing
to accept (not zero).

For example, add 200 KB to today's max block size. How does that affect fees?
(200 KB would likely be the first increase if Block75 activated today.)

-t.k.
Post by James Hilliard via bitcoin-dev
[snip - full quote of James Hilliard's message above]
Andrew Johnson via bitcoin-dev
2016-12-11 20:38:27 UTC
"You miss something obvious that makes this attack actually free of cost.
Nothing will "cost them more in transaction fees". A miner can create
thousands of transactions paying to himself, and not broadcast them to
the network, but hold them and include them in the blocks he mines. The
fees are collected by him because transactions are included in a block
that he mined and the left amount is in another wallet of the same
person. Repeat this continuously to fill blocks."

This is easily detectable as long as the network isn't heavily
partitioned (which is an assumption we make today in order for transaction
propagation to work reliably, as well as for xThin and CompactBlocks to work
effectively to reduce block transmission time). Other miners would have an
incentive to intentionally orphan blocks that contained a large number of
transactions that their nodes were unaware of.

I don't think this sort of attack would last long. Even later, when
subsidies are drastically reduced, you would still lose out on significant
genuine fee revenue if your orphan rate increased even 10% (one out of ten
of your poison blocks intentionally orphaned by another miner).

On Dec 11, 2016 11:12 AM, "s7r via bitcoin-dev":
[snip - full quote of s7r's message above]
s7r via bitcoin-dev
2016-12-11 23:22:53 UTC
Post by Andrew Johnson via bitcoin-dev
"You miss something obvious that makes this attack actually free of cost.
Nothing will "cost them more in transaction fees". A miner can create
thousands of transactions paying to himself, and not broadcast them to
the network, but hold them and include them in the blocks he mines. The
fees are collected by him because the transactions are included in a block
that he mined, and the remaining amount is in another wallet of the same
person. Repeat this continuously to fill blocks."
This is easily detectable as long as the network isn't heavily
partitioned (which is an assumption we make today in order for
transaction propagation to work reliably, as well as for xThin and
CompactBlocks to work effectively to reduce block transmission time).
Other miners would have an incentive to intentionally orphan blocks that
contained a large number of transactions that their nodes were unaware of.
I don't think this sort of attack would last long. Even later, when
subsidies are drastically reduced, you would still lose out on
significant genuine fee revenue if your orphan rate increased even
10% (one out of ten of your poison blocks intentionally orphaned by
another miner).
I disagree.

I didn't say this is impossible to detect, but it is hard to act against.
One miner orphaning the block intentionally is very unlikely if that
miner acts rationally. It would only make sense if 51% of the hash rate
intentionally orphaned it. Otherwise, the miner who intentionally
orphans a valid block, let's say block X, has to continue to mine one in
its place on top of block X-1, and by the time he finds one:

a) his block X' is rejected by other miners because they already have a
valid block X on top of which they already started to mine;

b) block X+1 was already found and broadcast, so the miner who
orphaned X intentionally is on the shorter chain, ignored by the network.

So, one miner cannot do anything about it. Even a pool cannot do
anything about it, because the loss is greater. You need 51% of the hash
rate to intentionally orphan it, and all the miners forming that 51% need to
be colluding and know for sure that every one will intentionally orphan
the said block; otherwise there's a huge risk of loss for whoever does it.
Nobody would gamble to do this (I am not sure if gambling is the right
word, since the loss is 100% certain here). But we are not discussing 51%
attacks because those are a different topic.

Daniele Pinna via bitcoin-dev
2016-12-10 12:23:49 UTC
We have models for estimating the probability that a block is orphaned
given average network bandwidth and block size.

The question is, do we have objective measures of these two quantities?
Couldn't we target an orphan_rate < max_rate?
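
For reference, a sketch of one simple first-order model (assuming Poisson
block arrivals; the bandwidth and latency figures below are illustrative
assumptions, not measurements):

import math

def orphan_probability(block_bytes, bandwidth_bps, latency_s=2.0,
                       block_interval_s=600.0):
    # P(a competing block is found while ours propagates):
    # p = 1 - exp(-tau / T), with tau = propagation time, T = 600 s.
    tau = latency_s + 8.0 * block_bytes / bandwidth_bps
    return 1.0 - math.exp(-tau / block_interval_s)

# Illustrative: 1 MB vs. 8 MB blocks over an effective 1 Mbps path.
for size in (1000000, 8000000):
    print(size, round(orphan_probability(size, 1000000), 4))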



Pieter Wuille via bitcoin-dev
2016-12-10 17:39:57 UTC
Permalink
On Sat, Dec 10, 2016 at 4:23 AM, Daniele Pinna via bitcoin-dev wrote:
Post by Daniele Pinna via bitcoin-dev
We have models for estimating the probability that a block is orphaned
given average network bandwidth and block size.
The question is, do we have objective measures of these two quantities?
Couldn't we target an orphan_rate < max_rate?
Models can predict orphan rate given block size and network/hashrate
topology, but you can't control the topology (and things like FIBRE hide
the effect of block size on this as well). The result is that if you're
purely optimizing for minimal orphan rate, you can end up with a single
(conglomerate of) pools producing all the blocks. Such a setup has no
propagation delay at all, and as a result can always achieve 0 orphans.

Cheers,
--
Pieter
Daniele Pinna via bitcoin-dev
2016-12-11 03:17:45 UTC
Permalink
How is the adverse scenario you describe different from a plain old 51%
attack? Each proposed protocol change where 51% or more of the network
can potentially game the rules and break the system should be considered
just as acceptable/unacceptable as another.

There comes a point where some form of basic honesty must be assumed on
behalf of participants benefiting from the system working properly and
reliably.

After all, what magic line of code prohibits all miners from simultaneously
turning all their equipment off... just because?

Maybe this 'one':

"As long as a majority of CPU power is controlled by nodes that are not
cooperating to attack the network, they'll generate the longest chain and
outpace attackers. The network itself requires minimal structure."

Is there such a thing as an unrecognizable 51% attack? One where the
remaining 49% get dragged in against their will?

Daniele
Eric Voskuil via bitcoin-dev
2016-12-11 05:29:08 UTC
Permalink
The presumption of the mining aspect of the Bitcoin security model is that the mining majority is a broadly distributed set of independent people, not one person who controls a majority of the hash power.

You seem to have overlooked a qualifier in your Satoshi quote: "...by nodes that are not cooperating to attack the network". A single miner with majority hash power is of course cooperating with himself. At that point the question of whether he is attacking the network is moot; it's his network.

I believe that Pieter's point is that a system optimized for orphan rate may in effect be optimized for a single entity providing all double spend protection. That works directly against the central principle of Bitcoin security. The security of the money is a function of the number of independent miners and sellers.

e
Adam Back via bitcoin-dev
2016-12-11 09:21:01 UTC
Permalink
Well, I think the empirical game theory observed on the network involves
more types of strategy than honest vs. dishonest: at least four, maybe
five, and I would argue that lumping the strategies together results in
incorrect game-theoretic conclusions and predictions.

A) Altruistic players (protocol-following on principle to be good network
citizens; will forgo incremental profits to aid network health), e.g. aim
to decentralize hashrate, mine stuck transactions for free, run pools
with zero fees, put extra effort into custom spam filtering; tend to be
power users or long-term invested.

B) Honest players (protocol-following but non-altruistic, or just
lazy/asleep and running default software, still leaving some dishonest
profit untaken), e.g. reject spy mining but take no charitable actions,
and will not retaliate in kind to semi-honest zero-sum attacks that
reduce their profits.

C) Semi-honest players (will violate the protocol if the attack is
plausibly deniable or arguably not hugely damaging to network security),
e.g. spy mining, or centralised pools increasing other miners' orphan rates.

D) Rational players (will violate the protocol for profit: will not
overtly steal from users via double spends, but anything short of that is
treated as fair game, particularly disadvantaging other miners, even if
it results in centralisation), e.g. selfish mining. Such a player would
increase block size by filling blocks with pay-to-self transactions if it
increased orphans for others.

E) Dishonest players (aka hyper-rational: will actually steal from users
probabilistically if possible, and are not as worried about detection),
e.g. double spends and probabilistic double spends (against onchain
gambling games). Such a player would DDoS competing pools.

The strategies depend in part on investment horizon: it is long-term
rational for an altruistic player to forgo incremental short-term profit
to improve user experience, and hyper-rational to buy votes in an "ends
justify the means" mentality, though fortunately most network players are
not dishonest.

The so-called meta-incentive (unwillingness to risk hurting Bitcoin,
because of an intention to hold coins or ASICs long term) can also
explain the bias towards honest or altruistic strategies.

Too much rentable hashrate is risky, as renting sidesteps the
meta-incentive and enables rational or dishonest strategies.

In particular, regarding how this differs from a 51% attack: so long as
more than 50% of the hashrate is semi-honest, honest, or altruistic, it
won't happen. It would seem that more than 66-75% actually is, because we
have not seen selfish mining on the network. (Conveniently slow block
publication by some players in the 60% spy-mining semi-honest cartel was,
I think, seen for a while, though the claim has been that it was
short-lived and due to a technical issue.)
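
For reference, the profitability threshold for selfish mining from Eyal
and Sirer's analysis, sketched in Python below, is consistent with that
range. Here gamma, the share of honest hashrate that ends up extending
the selfish branch during a tie, is an illustrative free parameter:

def selfish_mining_threshold(gamma):
    # Minimum hashrate share above which selfish mining out-earns honest
    # mining, per Eyal & Sirer's model.
    return (1.0 - gamma) / (3.0 - 2.0 * gamma)

for gamma in (0.0, 0.5, 1.0):
    print(f"gamma = {gamma:.1f}: profitable above "
          f"{selfish_mining_threshold(gamma):.1%} of hashrate")

The resulting 25-33% attacker threshold for gamma between 0 and 0.5 is
the complement of the 66-75% honest share estimated above.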

It would be interesting to try to categorise and estimate the network %
engaging in each strategy. I think the information is mostly known.

Adam

Bram Cohen via bitcoin-dev
2016-12-10 23:12:25 UTC
Permalink
On Mon, Dec 5, 2016 at 7:27 AM, t. khan via bitcoin-dev wrote:
Post by t. khan via bitcoin-dev
Put another way: let’s stop thinking about what the max block size should
be and start thinking about how full we want the average block to be
regardless of size. Over the last year, we’ve had averages of 75% or
higher, so aiming for 75% full seems reasonable, hence naming this concept
‘Block75’.
That's effectively making the blocksize limit completely uncapped and only
preventing spikes, and even in the case of spikes it doesn't differentiate
between 'real' traffic and low-value spam attacks. It suffers from the same
fundamental problems as Bitcoin Unlimited: there are in the end no
transaction fees, and inevitably some miners will want to impose some cap
on block size for practical purposes, resulting in a fork.

Difficulty adjustment works because there's a clear goal of having a
certain rate of making new blocks. Without a target to attempt, automatic
adjustment makes no sense.
t. khan via bitcoin-dev
2016-12-11 00:52:58 UTC
Permalink
Agreed, the clear goal of 10 minutes per block is why the difficulty
adjustment works well. Blocks averaging 75% full is the clear goal of the
described method. That's the target to attempt.

Under Block75, there will still be full blocks. There will still be
transaction fees and a fee market. The fees will, of course, be lower than
they are now.

Hardcoding a cap will inevitably become a roadblock (again), and we'll be
back in the same position as we are now. Permanent solutions are preferred.
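
To make the described method concrete, here is a minimal sketch in Python
of one reading of the adjustment rule. The constant names follow the
thread's pseudocode; the clamp at the 1,000,000-byte floor reflects
"effectively the minimum max block size":

MAX_BLOCK_BASE_SIZE = 1_000_000  # permanent floor, bytes
TARGET_CAPACITY = 750_000        # the 75% target, bytes

def next_max_block_size(avg_block_size_last_2016_blocks):
    # Add (or subtract) the amount by which the 2016-block average
    # deviates from the target, never dropping below the 1 MB floor.
    average_over_cap = avg_block_size_last_2016_blocks - TARGET_CAPACITY
    return max(MAX_BLOCK_BASE_SIZE, MAX_BLOCK_BASE_SIZE + average_over_cap)

# The thread's worked examples: 850 KB -> 1,100 KB; 935 KB -> 1,185 KB;
# 1,050 KB -> 1,300 KB.
for avg in (850_000, 935_000, 1_050_000):
    print(f"avg {avg:,} B -> max {next_max_block_size(avg):,} B")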