Discussion:
A Proposed Compromise to the Block Size Limit
Michael Naber
2015-06-27 14:39:51 UTC
Demand to participate in a low-fee global consensus network will likely
continue to rise. The technology already exists to meet that rising demand
using a blockchain with a sufficient block size. Whether that blockchain is
Bitcoin Core with an increased block size, or whether it is a fork, market
forces make it almost certain that demand will be met by a blockchain with
adequate capacity. These forces ensure not only that today’s block size
will be increased, but also that future increases will occur should demand
arise.

In order to survive, Bitcoin Core must remain the lowest-fee,
highest-capacity, most secure, distributed, fastest, overall best solution
possible to the global consensus problem. Attempting to artificially
constrain the block size below the limits of technology, for any reason,
conflicts with this objective and threatens the survival of Bitcoin Core.
At the same time, scheduling large future increases or permitting unlimited
dynamic scaling of the block size limit raises concerns over the
availability of future computing resources. Instead, we should manually
increase the block size limit as demand occurs, except in the special case
that increasing the limit would place an undue burden upon users wishing to
validate the integrity of the blockchain.

Compromise: Can we agree that raising the block size to a static 8MB now,
with a plan to increase it further should demand necessitate (except in the
special case above), is a reasonable path forward?
Peter Todd
2015-06-27 15:21:25 UTC
Post by Michael Naber
Compromise: Can we agree that raising the block size to a static 8MB now
with a plan to increase it further should demand necessitate except in the
special case above is a reasonable path forward?
It's not a reasonable path forward right now, given the lack of testing done with 8MB+ blocks, among many other problems. A way to make it look more reasonable would be to set up an 8MB testnet as I suggested, with two years or so of 8MB blocks in its history as well as a large UTXO set, to test performance characteristics.

Of course, that'll be an 840GB download - if that's unreasonable, you might want to ask why 8MB blocks are reasonable...
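[A quick back-of-envelope check of that 840GB figure (my own arithmetic, assuming every block is full at 8MB and the usual ~10-minute block interval):]

```python
# Two years of consistently full 8MB blocks, one block every 10 minutes.
blocks_per_year = 6 * 24 * 365            # ~52,560 blocks per year
history_mb = 8 * blocks_per_year * 2      # 8 MB per block, two years of history
print(round(history_mb / 1000))           # -> 841 (GB), matching the ~840GB figure
```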
Randi Joseph
2015-06-27 15:29:07 UTC
I wish you were just as prudent when you were recommending full RBF to
mining pools.
Post by Peter Todd
It's not a reasonable path forward right now, given the lack of testing done with 8MB+ blocks, among many other problems. A way to make it look more reasonable would be to set up an 8MB testnet as I suggested, with two years or so of 8MB blocks in its history as well as a large UTXO set, to test performance characteristics.
--
Randi Joseph
Peter Todd
2015-06-27 15:32:07 UTC
Post by Randi Joseph
I wish you were just as prudent when you were recommending full RBF to
mining pools.
You know, if doing that is imprudent, then people are using Bitcoin in a recklessly dangerous way.
Michael Naber
2015-06-27 16:19:04 UTC
That test seems like a reasonable suggestion; 840GB is not prohibitive
given today's computing costs. What other than the successful result of
that test would you want to see before agreeing to increase the block size
to 8MB?
Peter Todd
2015-06-27 17:20:11 UTC
Post by Michael Naber
That test seems like a reasonable suggestion; 840GB is not prohibitive
given today's computing costs. What other than the successful result of
that test would you want to see before agreeing to increase the block size
to 8MB?
The two main things you need to show are:

1) Small, anonymous miners remain approximately as profitable as large
miners, regardless of where they are in the world, and even when miners
are under attack. Remember, I'm talking about mining here, not just
hashing - hashing being the process of selling your hashpower to someone
else who is actually doing the mining.

As for "approximately as profitable": based on a 10% profit margin, a 5%
profitability difference between a negligible ~0% hashing power miner
and a 50% hashing power miner is a good standard here.

The hard part here is basically keeping orphan rates low, as the 5%
profitability difference on a 10% profit margin implies an orphan rate of
about 0.5% - roughly what we have right now, if not actually a bit lower.
That also implies blocks propagate across the network in just a few
seconds in the worst case, where blocks are being generated with
transactions in them that are not already in mempools - circumventing
propagation optimization techniques. As we're talking about small
miners, we can't assume the miners are directly connected to each other
(which itself is dangerous from an attack point of view - if they're
directly connected they can be DoS attacked).

2) A medium- to long-term plan to pay for hashing power. Without scarcity
of blockchain space there is no reason to think that transaction fees
won't fall to the marginal cost of including a transaction, which
doesn't leave anything to pay for proof-of-work security. A proposal
meeting this criterion will have to be clever if you don't keep the
blocksize sufficiently limited that transaction fees are non-negligible.
One possible approach - though probably politically non-viable - would be
to change the inflation schedule so that the currency is inflated
indefinitely.
--
'peter'[:-1]@petertodd.org
0000000000000000007fc13ce02072d9cb2a6d51fae41fefcde7b3b283803d24
Benjamin
2015-06-27 17:26:00 UTC
"Thus we have a fixed capacity system where access is mediated by supply
and demand transaction fees."

There is no supply and demand. That would mean users could adapt their
fees and get a different quality of service depending on current capacity.
For example, if peak load is 10x average load, then at those times fees
would be higher and users would delay transactions to smooth out demand.
_______________________________________________
bitcoin-dev mailing list
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Peter Todd
2015-06-27 17:37:24 UTC
Post by Benjamin
"Thus we have a fixed capacity system where access is mediated by supply
and demand transaction fees."
There is no supply and demand. That would mean users would be able to adapt
fees and get different quality of service depending on current capacity.
For example if peak load is 10x average load, then at those times fees
would be higher and users would delay transactions to smooth out demand.
That's exactly how Bitcoin works already. See my article on how
transaction fees work for more details:

https://gist.github.com/petertodd/8e87c782bdf342ef18fb
--
'peter'[:-1]@petertodd.org
0000000000000000007fc13ce02072d9cb2a6d51fae41fefcde7b3b283803d24
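[The fee mechanism Peter is pointing at can be sketched as a toy model (my own illustration, not Bitcoin Core's actual block-template code): block space is fixed, miners fill it highest-feerate-first, and at peak load the lower bids simply wait for a later block.]

```python
def fill_block(mempool, capacity):
    """Greedily pack transactions into a fixed-capacity block by feerate.

    mempool: list of (size_bytes, feerate) tuples. A simplification that
    ignores dependencies between transactions.
    """
    chosen, used = [], 0
    for size, feerate in sorted(mempool, key=lambda tx: tx[1], reverse=True):
        if used + size <= capacity:
            chosen.append((size, feerate))
            used += size
    return chosen

mempool = [(250, 50), (250, 10), (250, 40), (250, 5)]   # (bytes, fee rate)
block = fill_block(mempool, capacity=500)   # room for only two transactions
print(block)    # -> [(250, 50), (250, 40)]: the two highest bidders get in
```

[Raising your bid improves your odds against the current mempool, but as Benjamin notes above, it cannot guarantee inclusion: you don't know what others will bid next.]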
Benjamin
2015-06-27 17:46:55 UTC
There is no ensured quality of service, is there? If you "bid" higher, you
don't know what you are going to get, not least because you have no way of
knowing what *others* are bidding. Only if you have auctions (increasing
increments) can you establish a feedback loop to settle demand and supply.
And the supply side doesn't adapt. Adapting supply would help resolve part
of the capacity problem.
Peter Todd
2015-06-27 17:54:28 UTC
Post by Benjamin
There is no ensured Quality of service, is there? If you "bid" higher, then
you don't know what you are going to get. Also because you have no way of
knowing what *others* are bidding. Only if you have auctions (increasing
increments) you can establish a feedback loop to settle demand and supply.
And the supply side doesn't adapt. Adapting supply would help resolve parts
of the capacity problem.
There are lots of markets where there is no assured quality of service,
and where the bids others are making aren't known. Most financial
markets work that way - there are only ever probabilistic guarantees that
for a given amount of money you'll be able to buy a certain amount of,
say, gold at any given time. Similarly, for nearly all commodities the
infrastructure required to mine them has very little room for short-,
medium-, or even long-term production increases, so whatever the
production supply is at a given time is pretty much fixed.
--
'peter'[:-1]@petertodd.org
0000000000000000007fc13ce02072d9cb2a6d51fae41fefcde7b3b283803d24
Venzen Khaosan
2015-06-27 17:58:59 UTC
Very interesting point and comparison. So the fee market is unknown,
similar to a market maker's order book - except that in the case of
Bitcoin it is not being deliberately hidden from users; it's just not
knowable how miners are positioning at any given moment.
Benjamin
2015-06-27 19:34:16 UTC
Hmm? If the current ask for 1 ounce of gold is $100, then you need to bid
$100 to get 1 ounce of gold. If tomorrow everyone agrees 1 ounce of gold
should be worth $200, then the bid moves accordingly. Of course production
changes based on prices; otherwise the economy would not function. If the
price of some stuff goes up, more people produce that stuff. For the price
of a transaction and the use of a blockchain, unfortunately, there is no
way to just add computational supply. That's an inherent weakness of how
blockchains are structured. Ideally it would be as simple as demanding
more resources, as when scaling a web service with AWS.
Adam Back
2015-06-27 15:33:11 UTC
Post by Michael Naber
Bitcoin Core must remain the lowest-fee, highest-capacity, most secure, distributed, fastest, overall best solution possible to the global consensus problem.
Everyone here is excited about the potential of Bitcoin and would
aspirationally like it to reach its full potential as fast as possible.
But the block size is not a free variable; half the parameters you
listed are in conflict with each other. We're trying to improve both
decentralisation and throughput short-term while people work on
algorithmic improvements mid-term. If you are interested, you can take
a look through the proposals:

http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-June/008603.html

Note that probably 99% of Bitcoin transactions already happen off-chain,
in exchanges, tipping services, hosted wallets, etc. Maybe you're
already using them, assuming you are a bitcoin user. They constitute an
early-stage layer 2; some of them even have on-chain netting and scale
faster than the blockchain.

You can also read about layer 2: the Lightning Network paper and the
duplex micropayment channel paper:

http://lightning.network/lightning-network-paper-DRAFT-0.5.pdf
http://www.tik.ee.ethz.ch/file/716b955c130e6c703fac336ea17b1670/duplex-micropayment-channels.pdf

and read the development list and look at the code:

http://lists.linuxfoundation.org/pipermail/lightning-dev/
https://github.com/ElementsProject/lightning

Adam
Michael Naber
2015-06-27 16:09:16 UTC
The goal of Bitcoin Core is to meet the demand for global consensus as
effectively as possible. Please let's keep the conversation on how to best
meet that goal.

The off-chain solutions you enumerate are useful in their respective
domains, but none of them solves the global consensus problem with any
greater efficiency than Bitcoin does.
Mark Friedenbach
2015-06-27 16:28:26 UTC
I really suggest you look into the layer2 systems Adam pointed to, as you
appear to be misinformed about their properties. There are many proposals
which really do achieve global consensus using the block chain, just in a
delayed (and cached) fashion that is still 100% safe.

It is possible to go off-chain without losing the trustlessness and
security of the block chain.
Peter Todd
2015-06-27 16:37:31 UTC
Post by Michael Naber
The goal of Bitcoin Core is to meet the demand for global consensus as
effectively as possible. Please let's keep the conversation on how to best
meet that goal.
Keep in mind that Andresen and Hearn both propose that the majority of
Bitcoin users, even businesses, abandon the global consensus technology
aspect of Bitcoin - running full nodes - and instead adopt trust
technology - running SPV nodes.

We're very much focused on meeting the demand for global consensus
technology, but unfortunately global consensus also has inherently
O(n^2) scaling with the approaches currently available. Thus we have a
fixed-capacity system where access is mediated by supply and demand via
transaction fees.
Post by Michael Naber
The off-chain solutions you enumerate are are useful solutions in their
respective domains, but none of them solves the global consensus problem
with any greater efficiency than Bitcoin does.
Solutions like (hub-and-spoke) payment channels, Lightning, etc. allow
users of the global consensus technology in Bitcoin to use that
technology in much more efficient ways, leveraging a relatively small
amount of global consensus to do large numbers of transactions
trustlessly.
--
'peter'[:-1]@petertodd.org
0000000000000000007fc13ce02072d9cb2a6d51fae41fefcde7b3b283803d24
Michael Naber
2015-06-27 17:25:14 UTC
Global network consensus means that there is global network recognition
that a particular transaction has occurred and is irreversible. The
off-chain solutions you describe, while probably useful for other purposes,
do not exhibit this characteristic and so they are not global network
consensus networks.

Bitcoin Core scales as O(N), where N is the number of transactions. Can we
do better than this while still achieving global consensus?
Peter Todd
2015-06-27 17:34:51 UTC
Post by Michael Naber
Global network consensus means that there is global network recognition
that a particular transaction has occurred and is irreversible. The
off-chain solutions you describe, while probably useful for other purposes,
do not exhibit this characteristic and so they are not global network
consensus networks.
Hub-and-spoke payment channels and the Lightning network are not
off-chain solutions; they are ways to use on-chain transactions more
efficiently to achieve the goal of moving assets from point a to point
b, resulting in more economic transactions being done with fewer - but
not zero! - blockchain transactions.

Off-chain transaction systems such as ChangeTip allow economic
transactions to happen with no blockchain transactions at all.
Post by Michael Naber
Bitcoin Core scales as O(N), where N is the number of transactions. Can we
do better than this while still achieving global consensus?
No, Bitcoin the network scales as O(n^2) by your above criteria: as
each node creates k transactions, each node has to verify k*n
transactions, resulting in O(n^2) total work.

For Bitcoin to have O(n) scaling you have to assume that the number of
validation nodes doesn't scale with the number of users, thus resulting
in a system where users trust others to do validation for them. That is
not a global consensus system; that's a trust-based system.

There's nothing inherently wrong with that, but why change Bitcoin
itself into a trust-based system, when you can preserve the global
consensus functionality and build a trust-based system on top of it?
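[The total-work accounting Peter describes can be made concrete with a toy calculation (my own illustration, under his assumption that every user runs a validating node and creates k transactions):]

```python
def total_validation_work(n_users, k_tx_per_user):
    """Total transaction validations performed across the whole network."""
    tx_on_chain = n_users * k_tx_per_user   # every user creates k transactions
    return n_users * tx_on_chain            # every node verifies every one

print(total_validation_work(10, 5))    # -> 500
print(total_validation_work(100, 5))   # -> 50000: 10x the users, 100x the work
```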
--
'peter'[:-1]@petertodd.org
0000000000000000007fc13ce02072d9cb2a6d51fae41fefcde7b3b283803d24
Jameson Lopp
2015-06-27 18:02:05 UTC
Post by Peter Todd
No, Bitcoin the network scales as O(n^2) by your above criteria: as
each node creates k transactions, each node has to verify k*n
transactions, resulting in O(n^2) total work.
For Bitcoin to have O(n) scaling you have to assume that the number of
validation nodes doesn't scale with the number of users, thus resulting
in a system where users trust others to do validation for them. That is
not a global consensus system; that's a trust-based system.
Why does it matter what the "total work" of the network is? Anyone who is
participating as a node on the network only cares about the resources
required to run their own node, not the resources everyone else needs to
run their nodes.

Also, no assumption needed, it is quite clear that the number of nodes is
not scaling along with the number of users. If anything it appears to be
inversely proportional.
Post by Michael Naber
There's nothing inherently wrong with that, but why change Bitcoin
itself into a trust-based system, when you can preserve the global
consensus functionality and build a trust-based system on top of it?
--
0000000000000000007fc13ce02072d9cb2a6d51fae41fefcde7b3b283803d24
_______________________________________________
bitcoin-dev mailing list
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Peter Todd
2015-06-27 18:47:51 UTC
Permalink
Post by Jameson Lopp
Post by Peter Todd
For Bitcoin to have O(n) scaling you have to assume that the number of
validation nodes doesn't scale with the number of users, thus resulting
in a system where users trust others to do validation for them. That is
not a global consensus system; that's a trust-based system.
Why does it matter what the "total work" of the network is? Anyone who is
participating as a node on the network only cares about the resources
required to run their own node, not the resources everyone else needs to
run their nodes.
Also, no assumption needed, it is quite clear that the number of nodes is
not scaling along with the number of users. If anything it appears to be
inversely proportional.
Which is a huge problem.

Concretely, what O(n^2) scaling means is that the more Bitcoin is
adopted, the harder it is to use in a decentralized way that doesn't
trust others; the blocksize limit puts a cap on how centralized Bitcoin
can get in a given technological landscape.
--
'peter'[:-1]@petertodd.org
0000000000000000007fc13ce02072d9cb2a6d51fae41fefcde7b3b283803d24
Raystonn
2015-06-28 05:34:23 UTC
Permalink
Adam Back
2015-06-28 10:07:40 UTC
Permalink
nodes are limited to 133 connections. This is 8 outgoing connections and
125 incoming connections. [...] Once your full node reaches 133 connections,
it will see no further increase in load [...] Only transaction rate will affect the
load on your node.
The total system cost is more relevant, or total cost per user. I think you
are stuck on O(t * m) thinking, where t = transactions and m = nodes. Total
cost per user is increasing, which means better scaling algorithms need to
be found. That's why people are working on lightning-like systems.
fear larger blocks based on an assumption of exponential growth of work, which just
isn't the case.
People have been explaining a quadratic system-level increase, which is
not exponential, so that is a wrong assumption.
Decentralisation is planned to scale down once the 133 connection limit is
hit. Like it or not, this is the current state of the code.
No, people are not assuming decentralisation would decrease. They are
assuming the number of economically dependent full nodes would increase;
that's where the O(n^2) comes from! If we assume, say, c = 0.1% of users
will run full nodes, and users make some small-world-assumed number of
transactions that doesn't increase greatly as more users are added to the
network, then O(t * m) => O(n^2).
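The O(t * m) => O(n^2) step can be made concrete with a small sketch. The fraction c and per-user rate k below are the argument's illustrative assumptions, not measurements of the real network:

```python
# Illustrative only: c and k are the assumed constants from the argument
# above, not measurements of the real network.
c = 0.001   # fraction of users running economically dependent full nodes
k = 2       # transactions per user per period (assumed roughly constant)

def network_costs(n_users):
    t = k * n_users    # total transactions: O(n)
    m = c * n_users    # full nodes: O(n) under the constant-fraction assumption
    per_node = t       # each full node validates every transaction
    total = t * m      # O(t * m) = O(n^2) system-wide validation work
    return per_node, total

# Doubling the user base doubles each node's load but quadruples the
# total system cost -- the quadratic increase being described.
per1, total1 = network_costs(10_000_000)
per2, total2 = network_costs(20_000_000)
assert per2 == 2 * per1 and total2 == 4 * total1
```

This also shows why both sides of the exchange above are right on their own terms: per-node load grows only linearly, while the system-wide cost grows quadratically.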

Seeing decentralisation failing isn't a useful direction, as Bitcoin depends
on decentralisation for most of its useful security properties. People
running around saying "great, let's centralise Bitcoin and scale it" are not
working on Bitcoin. They may more usefully go work on competing systems
without proof of work, as that's where this line of reasoning ends up. There
are companies working on such things. Some of them support Bitcoin IOUs.
Some of them have job openings.

We can improve decentralisation, and use bandwidth and relay improvements
to get some increase in throughput. But starting down a path of simplistic,
ever-increasing block-size thinking is destructive and not Bitcoin. If you
want to do that, you need to do it in an offchain system. You can't build
on sand, so your offchain system won't be useful if Bitcoin doesn't have
reasonable decentralisation to retain useful meaning. Hence lightning.
There are existing layer 2 things that have on-chain netting. Go work on
one of those. But people need to understand the constraints and stop
arguing to break Bitcoin to "scale". It's too simplistic.

Even Gavin's proposal is not trying to do that, hence the reference to
Nielsen's law. His parameters are too high for too long for basic safety or
prudence, but the general idea, to reclaim some throughput from network
advances, is reasonable. Also decentralisation is key, and that is
something we can improve with pooling protocols to phase out the artificial
centralisation. We can also educate people to use a full node they
economically depend on, to keep the full-to-SPV ratio reasonable, which is
also needed for security.

Adam
Benjamin
2015-06-28 10:29:29 UTC
Permalink
I agree that naive scaling will likely lead to bad outcomes. They might
have the advantage though, as this would mean not changing Bitcoin.

Level2 and Lightning are not well defined. If you move money to a third
party, even if it is within the constraints of a locked contract, then I
don't think that will solve the issues. The blockchain does not know about
offchain, and moving between offchain and onchain requires liquidity and a
pricing mechanism. That is exactly the problem with side-chains. If you
have off-chain transactions on an exchange, they are ID'ed in their system,
subject to KYC/AML.
Adam Back
2015-06-28 12:37:57 UTC
Permalink
I agree that naive scaling will likely lead to bad outcomes. They might have
the advantage though, as this would mean not changing Bitcoin.
Sure we can work incrementally and carefully, and this is exactly what
Bitcoin has been doing, and *must* do for safety and security for the
last 5 years!
That doesnt mean that useful serious improvements have not been made.
Level2 and Lightning are not well defined. If you move money to a third
party, even if it is within the constraints of a locked contract, then I
don't think that will solve the issues.
I think you misunderstand how lightning works. Every lightning
transaction *is* a valid bitcoin transaction that could be posted to
the Bitcoin network to reclaim funds if a hub went permanently
offline. It is just that while the hubs involved remain in service,
there is no need to do so. This is why it has been described as a
(write coalescing) write cache layer for Bitcoin.
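The write-cache analogy can be sketched as a toy model. This models only the balance bookkeeping, not real Bitcoin scripts or signatures, and the names and amounts are invented for illustration:

```python
# Toy model of a payment channel: every update stands in for a valid,
# mutually signed settlement transaction, but only the final state ever
# needs to be broadcast on-chain. (No real scripts or signatures here.)

class PaymentChannel:
    def __init__(self, funding_a, funding_b):
        self.balances = {"alice": funding_a, "hub": funding_b}
        self.states = []  # each entry stands in for a broadcastable tx

    def pay(self, sender, receiver, amount):
        assert self.balances[sender] >= amount, "insufficient channel funds"
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.states.append(dict(self.balances))  # new settlement state

    def settle(self):
        """Close the channel: only this one state hits the chain."""
        return dict(self.balances)

# A thousand off-chain payments net to a single on-chain settlement.
ch = PaymentChannel(funding_a=100_000, funding_b=100_000)
for _ in range(1_000):
    ch.pay("alice", "hub", 10)
assert ch.settle() == {"alice": 90_000, "hub": 110_000}
assert len(ch.states) == 1_000
```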

I believe people expect lightning to be peer 2 peer like bitcoin.

Adam
Raystonn .
2015-06-28 16:32:05 UTC
Permalink
Write coalescing works fine when you have multiple writes headed to the same
(contiguous) location. Will lightning be useful when we have more unique
transactions being sent to different addresses, and not just multiple
transactions between the same sender and recipient? I have doubts.


-----Original Message-----
From: Adam Back
Sent: Sunday, June 28, 2015 5:37 AM
To: Benjamin
Cc: bitcoin-***@lists.linuxfoundation.org
Subject: Re: [bitcoin-dev] A Proposed Compromise to the Block Size Limit
Mark Friedenbach
2015-06-28 17:12:35 UTC
Permalink
Think in terms of participants, not addresses. A participant in the
lightning network has a couple of connections to various hubs, from which
the participant is able to send or receive coin. The user is able to send
coins to anyone connected to the lightning network by means of an atomic
transaction through any path of the network. But the only payment from them
that ever hits the chain is their settlement with the hub.

Imagine there was a TCP/IP data chain and corresponding lightning network.
Everyone connected to the network has an "IP" channel with their ISP.
Through this channel they can send data to anywhere on the network, and a
traceroute shows what hops the data would take. But when settlement
actually occurs all the network sees is the net amount of data that has
gone through each segment -- without any context. There's no record
preserved on-chain of who sent data to whom, just that X bytes went through
the pipe on the way to somewhere unspecified.

So it is with lightning payment networks. You open a channel with a hub and
through that channel send coins to anyone accessible to the network.
Channels only close when a participant needs the funds for non-lightning
reasons, or when hubs need to rebalance. And when they do, observers on the
chain learn nothing more than how much net coin moved across that single
link. They learn nothing about where that coin eventually ended up.

So back to your original question, each channel can be considered to have a
pseudonymous identity, and each new channel given a new identity. Channel
closures can even be coinjoin'd when the other party is cooperating. But
ultimately, lightning usefully solves a problem where participants have
semi-long lived payment endpoints.
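The settlement-privacy point above can be sketched in a few lines. The hub names and amounts are invented, and real routing also involves fees and HTLCs, which this toy omits:

```python
# Toy model: payments are source-routed hop by hop, but the chain only
# ever learns each link's *net* flow at settlement time -- not who paid
# whom end to end.
from collections import defaultdict

net_flow = defaultdict(int)  # (a, b) -> total amount forwarded a -> b

def route(path, amount):
    """Forward `amount` along each hop of `path` (fees/HTLCs omitted)."""
    for a, b in zip(path, path[1:]):
        net_flow[(a, b)] += amount

route(["alice", "hub1", "hub2", "bob"], 50)
route(["carol", "hub1", "hub2", "bob"], 30)
route(["bob", "hub2", "hub1", "alice"], 20)

# Settling the hub1<->hub2 link reveals only a net 60 moving one way;
# nothing on-chain ties alice or carol to bob.
net = net_flow[("hub1", "hub2")] - net_flow[("hub2", "hub1")]
assert net == 60
```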
Benjamin
2015-06-28 17:18:03 UTC
Permalink
"You open a channel with a hub and through that channel send coins to
anyone accessible to the network."

Define hub *precisely* and you will find there are
some significant problems here.
a) Does everyone know each other in the network? In Bitcoin, transacting
parties exchange keys out of band. How do I know that Alice is the owner of
a pubkey? I don't, and if I don't know Alice I'm out of luck and can't
transact with her (or must trust another PKI).
b) Hubs need incentives. They are not going to put up collateral just for
nothing.
c) How is complexity reduced? I would speculate that most transactions are
one-time transactions in the time frame of days.

LN is a very interesting idea, but far from actual implementation.
Gavin Andresen
2015-06-28 17:29:10 UTC
Permalink
But ultimately, lightning usefully solves a problem where participants
have semi-long lived payment endpoints.
Very few of my own personal Bitcoin transactions fit that use-case.

In fact, very few of my own personal dollar transactions fit that use-case
(I suppose if I was addicted to Starbucks I'd have one of their payment
cards that I topped up every once in a while, which would map nicely onto a
payment channel). I suppose I could setup a payment channel with the
grocery store I shop at once a week, but that would be inconvenient (I'd
have to pre-fund it) and bad for my privacy.

I can see how payment channels would work between big financial
institutions as a settlement layer, but isn't that exactly the
centralization concern that is making a lot of people worried about
increasing the max block size?

And if there are only a dozen or two popular hubs, that's much worse
centralization-wise compared to a few thousand fully-validating Bitcoin
nodes.

Don't get me wrong, I think the Lightning Network is a fantastic idea and a
great experiment and will likely be used for all sorts of great payment
innovations (micropayments for bandwidth maybe, or maybe paying workers by
the hour instead of at the end of the month). But I don't think it is a
scaling solution for the types of payments the Bitcoin network is handling
today.
--
Gavin Andresen
Mark Friedenbach
2015-06-28 17:45:58 UTC
Permalink
Gavin, do you use a debit card or credit card? Then you do fit that use
case. When you buy a coffee at Starbucks, it is your bank that pays
Starbuck's bank. So it is with micropayment hubs.
Adam Back
2015-06-28 17:51:00 UTC
Permalink
But ultimately, lightning usefully solves a problem where participants have semi-long lived payment endpoints.
Recipients do benefit from keeping connections to hubs because if a
hub goes away or a user abandons a hub that tends to generate new
on-chain traffic for balance reclaim, and new channel establishment,
as we understand the limits so far.
Very few of my own personal Bitcoin transactions fit that use-case.
I believe Mark is talking about the one-hop (direct) connections
benefiting from being long-lived; the payment destination is not
restricted in the same way. It's more like having a static IP address
with your ISP: that doesn't stop you reaching anywhere on the internet.

Say the Lightning Network has an average fan-out of 10; now, subject to
capital and rebalancing flows in the network, you can pay any one of a
billion people in 9 hops. Maybe the fan-out is lumpy, with some bigger
hubs - that just serves to reduce the number of hops. Maybe there are
some capitalisation limits; that is dealt with by negative fees and
recirculation (more on that below) or, failing that, recapitalisation
on-chain. Some people assume that a hub will run out of capitalisation
on a given channel; however, if people and hubs retain redundant
channels they can be paid to rebalance channels, and even users can be
paid by other users if there is a net flow from some users to a given
business, e.g. Starbucks, where the users just buy new BTC for USD and
spend and don't earn BTC. Rebalancing would work because the exchange
where they buy new BTC would be incentivised to pay Starbucks (or
whoever has excess coins on a channel) to send the coins back to the
users topping up, by paying them negative fees, because the fees to do
that should be less than using on-chain transactions.
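The hop-count arithmetic above is just exponential fan-out, a rough upper bound that ignores overlapping routes:

```python
# Rough reachability bound: with average fan-out f, at most f**h
# endpoints are reachable within h hops (ignoring overlapping routes).

def reachable(fanout, hops):
    return fanout ** hops

assert reachable(10, 9) == 1_000_000_000  # "a billion people in 9 hops"
# A lumpier topology with bigger hubs needs fewer hops for the same reach:
assert reachable(100, 5) >= reachable(10, 9)
```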
But I don't think it is a scaling solution for the types of payments the Bitcoin
network is handling today.
Actually I think it may well be able to do that very well. We don't
know for sure how it will work until we see the balance and
effectiveness of the network algorithms against usage (e.g. simulating
from Bitcoin's historic usage, say), but there's good reason to see
that BTC can recirculate and rebalance due to the reversible
non-expiring channels, and capitalisation requirements can be lower
than simple expectation due to higher velocity and redistribution of
fees to anyone with excess liquidity and connectivity heading in the
right direction.

Adam
Adam Back
2015-06-28 18:58:55 UTC
Permalink
This is probably going to sound impolite, but I think it's pertinent.

Gavin, dwelling on the fact that you appear to not understand the
basics of the lightning network, I am a little alarmed, given your
recent proposals to unilaterally push the network into quite dangerous
areas of game theory, to lobby companies, etc.

People are super polite and respectful around here, but this is not
looking good, if you don't mind me saying so. You can't make balanced
or informed trade-offs on block-size schedules stretching into the
future if you don't understand work that is underway, and has been for
months. Lightning is a major candidate approach that the rest of the
technical community sees for scaling Bitcoin.

Lightning allows Bitcoin to scale even without a block-size increase,
and therefore considerably impacts any calculation of how much
block-size is required. In this light you appear to have been
attempting to push through a change without even understanding the
alternatives or greater ecosystem.

Adam
Gavin Andresen
2015-06-28 21:05:10 UTC
Permalink
Post by Adam Back
This is probably going to sound impolite, but I think it's pertinent.
Gavin, dwelling on the fact that you appear to not understand
the basics of the lightning network, I am a little alarmed about this
If I don't see how switching from the thousands of fully-validating
bitcoin nodes to (tens? hundreds?) of Lightning Network hubs is better in
terms of decentralization (or security, in terms of Sybil/DoS attacks),
then I doubt other people do, either. You need to do a better job of
explaining it.

But even if you could convince me that it WAS better from a
security/decentralization point of view:

a) Lightning Network is nothing but a whitepaper right now. We are a long
way from a practical implementation supported by even one wallet.

b) The Lightning Network paper itself says bigger blocks will be needed
even if (especially if!) Lightning is wildly successful.
Michael Naber
2015-06-28 21:23:51 UTC
Permalink
Bitcoin Core exists to solve the global consensus problem. Global network
consensus means that there is global network recognition that a particular
transaction has occurred and is irreversible. Systems like hub-and-spoke,
payment channels, Lightning, etc. are useful, but they are not solutions to
the global consensus problem, because they do not meet this definition of
global consensus.

Let us focus our efforts on the goal of making Bitcoin Core the best
solution to the global consensus problem. Let us address Peter Todd’s
requirements to raise the block size limit to 8MB:

1) Run a successful test-net with 8MB blocks and show that the network
works and small miners are not unduly disadvantaged

2) Address Peter Todd's concern: “without scarcity of blockchain space
there is no reason to think that transaction fees won’t fall to the
marginal cost of including a transaction, which doesn’t leave anything to
pay for proof-of-work security”

Regarding 1: This is not done yet, though it seems reasonable enough to do.

Regarding 2: It is a fallacy to believe that artificially constraining
capacity of Bitcoin Core below the limits of technology will lead to
increased fees and therefore lead to sufficient security in the far-future.
Constraining capacity below the limits of technology will ultimately only
drive users seeking global consensus to solutions other than Bitcoin Core,
perhaps through a fork.

Demand for user access to high-capacity global consensus is real, and the
technology exists to deliver it; if we don't meet that demand in Bitcoin
Core, it's inevitably going to get met through some other product. Let's
not let that happen. Let's keep Bitcoin Core the best solution to the
global consensus problem.

Thoughts? Is there anything else not mentioned above which anyone would
like done in order to raise the block size to a static 8 MB?
Adam Back
2015-06-28 22:07:11 UTC
Permalink
Post by Gavin Andresen
Post by Adam Back
This is probably going to sound impolite, but I think it's pertinent.
Gavin, on dwelling on the the fact that you appear to not understand
the basics of the lightning network, I am a little alarmed about this
If I don't see how switching from using the thousands of fully-validating
bitcoin nodes with (tens? hundreds?) of Lightning Network hubs is better in
terms of decentralization (or security, in terms of Sybil/DoS attacks),
It's a source-routed network, not a broadcast network. Fees are charged
on channels, so DoS is just a way to pay people a multiple of the
bandwidth cost.
Post by Gavin Andresen
I don't mind a set of central authorities being part of an option IF the
central authority doesn't need to be trusted. On the blockchain, the larger
a miner is, the more you have to trust them not to collude with anyone to
reverse your payments or destroy the trust in the system in some attack. On
the Lightning network, a large hub can't steal my money.
I think most people share the sentiment that trustlessness is what matters and
decentralization is just a synonym for trustlessness when talking about the blockchain
and mining, however decentralization isn't necessarily synonymous with trustlessness
nor is centralization synonymous with trust-requiring when you're talking about
something else.
then I doubt other people do, either. You need to do a better job of explaining it.
I gave it a go a couple of posts up. I didn't realise people here
proposing mega-blocks were not paying attention to the whole lightning
concept and detail.

People said lots of things about how it's better to work on lightning,
to scale algorithmically, rather than increasing block-size to
dangerously centralising proportions.
Did you think we were Gish Galloping you? We were completely serious.

The paper is on http://lightning.network

though it is not so clearly explained there, however Joseph is working
on improving the paper as I understand it.

Rusty wrote a high-level blog explainer: http://rusty.ozlabs.org/?p=450

though I don't recall that he got into recirculation, negative fees
etc. A good question
for the lightning-dev mailing list maybe.

http://lists.linuxfoundation.org/pipermail/lightning-dev/

There are a couple of recorded presentation videos / podcasts from Joseph Poon.

sf bitcoin dev presentation:

epicenter bitcoin:
There's a related paper from Christian Decker "Duplex Micropayment Channels"

http://www.tik.ee.ethz.ch/file/716b955c130e6c703fac336ea17b1670/duplex-micropayment-channels.pdf
Post by Gavin Andresen
But even if you could convince me that it WAS better from a
security/decentralization point of view:
We don't need to convince people, we just have to code it and
demonstrate it, which people are working on.

But Lightning does need a decentralised and secure Bitcoin network for
anchor and reclaim transactions, so take it easy with the mega-blocks
in the mean-time.
Post by Gavin Andresen
a) Lightning Network is nothing but a whitepaper right now. We are a long
way from a practical implementation supported by even one wallet.
maybe you want to check in on

https://github.com/ElementsProject/lightning

and help code it.

I expect we can get something running inside a year, which kind of
obviates the burning "need" for a schedule into the far future rising
to 8GB with unrealistic bandwidth growth assumptions that will surely
cause centralisation problems.

For block-size I think it would be better to have a 2-4 year or
one-off size bump with policy limits and then re-evaluate after we've
seen what lightning can do.

I have been saying the same thing ad nauseam for weeks.
Post by Gavin Andresen
b) The Lightning Network paper itself says bigger blocks will be needed even
if (especially if!) Lightning is wildly successful.
Not nearly as big as if you tried to put the transactions it would
enable on the chain, that's for sure! We don't know what that limit is,
but people have been imagining 1,000 or 10,000 transactions per anchor
transaction. If micro-payments get popular, many more.
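The 1,000-10,000 figure implies a simple back-of-envelope calculation. A sketch in Python, with illustrative assumptions that are not from the post itself (roughly 2,000 transactions per 1 MB block, 144 blocks per day, and two on-chain transactions to open and close a channel):

```python
# Rough capacity arithmetic for the figures in the post.
# All constants are illustrative assumptions, not measured values.
TXS_PER_BLOCK = 2000          # ~1 MB block at ~500 bytes/tx
BLOCKS_PER_DAY = 144          # one block every ~10 minutes
ONCHAIN_TXS_PER_CHANNEL = 2   # anchor (open) + settlement (close)

def effective_payments_per_day(payments_per_channel):
    """Off-chain payments the chain can support per day if every
    on-chain transaction went to opening/closing channels."""
    channels_per_day = TXS_PER_BLOCK * BLOCKS_PER_DAY // ONCHAIN_TXS_PER_CHANNEL
    return channels_per_day * payments_per_channel

for n in (1_000, 10_000):
    print(f"{n:>6} payments/channel -> "
          f"{effective_payments_per_day(n):,} payments/day")
```

Under these assumptions the same 1 MB block space carries hundreds of millions of payments per day instead of a few hundred thousand, which is the multiplier being argued about.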

Basically users would park Bitcoins on a hub channel instead of the
blockchain. The channel can stay up indefinitely, and the user has
assurances analogous to the GreenAddress time-lock mechanism.

Flexcap may be a better solution because it allows bursting the
block size when economically rational.

Note that the time-locks with lightning are assumed to be relative
CLTV, e.g. using the mechanism Mark Friedenbach described in a post
here, and as implemented in the Elements sidechain, so there is no
huge rush to reclaim funds. They can be spread out in time.
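A minimal sketch of the difference this makes (the helper names are hypothetical; neither is real consensus code): with an absolute lock, every channel races the same fixed deadline, while with a relative lock the clock starts only when the commitment transaction actually confirms, so reclaims can spread out over time.

```python
# Absolute vs. relative time-locks for channel refunds (illustrative).

def absolute_refund_height(lock_height):
    # nLockTime-style: the refund becomes spendable at one fixed block
    # height, so everyone sharing that deadline must reclaim before it.
    return lock_height

def relative_refund_height(commitment_confirm_height, csv_delay):
    # Relative (CSV/relative-CLTV) style: the delay counts from the
    # block in which the commitment confirms, so different channels'
    # reclaim windows naturally stagger.
    return commitment_confirm_height + csv_delay

print(absolute_refund_height(400_000))
print(relative_refund_height(500_000, 144))
```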

If you want to scale Bitcoin - like really scale it - work on
lightning. Lightning + a decentralised and secure Bitcoin, scales
further and is more trustless than Bitcoin forced into centralisation
via premature mega-blocks.

To my mind a shorter, more conservative block-size increase to give a
few years room is enough for now. We'll be in a better position to
know what the right next step is after lightning is running.

Something to mention is that you can elide transactions before
reclaiming. So long as the balancing transaction is correct, someone
online can swap it for you with an equal-balance one with fewer hops
of intermediate payment flows.


It's pretty interesting what you can do already. I'm fairly confident
we're not finished algorithmically optimising it either. It's
surprising how much new territory there is just sitting there
unexplored.

Adam
Eric Lombrozo
2015-06-29 00:59:40 UTC
Permalink
There’s no question that a flooding mesh network requiring global consensus for every transaction is not the way. It’s also clear that a routable protocol capable of compensating hubs is basically the holy grail.

So what’s there to discuss?

- Eric
Post by Adam Back
Post by Gavin Andresen
Post by Adam Back
This is probably going to sound impolite, but I think it's pertinent.
Gavin, dwelling on the fact that you appear to not understand
the basics of the lightning network, I am a little alarmed about this
If I don't see how switching from using the thousands of fully-validating
bitcoin nodes with (tens? hundreds?) of Lightning Network hubs is better in
terms of decentralization (or security, in terms of Sybil/DoS attacks),
It's a source-routed network, not a broadcast network. Fees are
charged on channels, so DoS is just a way to pay people a multiple of
bandwidth cost.
Post by Gavin Andresen
I don't mind a set of central authorities being part of an option IF the central authority
doesn't need to be trusted. On the blockchain, the larger miner is, the more you have
to trust them to not collude with anyone to reverse your payments or destroy the trust
in the system in some attack. On the Lightning network, a large hub can't steal my
money.
I think most people share the sentiment that trustlessness is what matters and
decentralization is just a synonym for trustlessness when talking about the blockchain
and mining, however decentralization isn't necessarily synonymous with trustlessness
nor is centralization synonymous with trust-requiring when you're talking about
something else.
then I doubt other people do, either. You need to do a better job of explaining it.
I gave it a go a couple of posts up. I didnt realise people here
proposing mega-blocks were not paying attention to the whole lightning
concept and detail.
People said lots of things about how it's better to work on lightning,
to scale algorithmically, rather than increasing block-size to
dangerously centralising proportions.
Did you think we were Gish Galloping you? We were completely serious.
The paper is on http://lightning.network
though it is not so clearly explained there, however Joseph is working
on improving the paper as I understand it.
Rusty wrote a high-level blog explainer: http://rusty.ozlabs.org/?p=450
though I don't recall that he got into recirculation, negative fees
etc. A good question
for the lightning-dev mailing list maybe.
http://lists.linuxfoundation.org/pipermail/lightning-dev/
There are a couple of recorded presentation videos / podcasts from Joseph Poon.
http://youtu.be/2QH5EV_Io0E
http://youtu.be/fBS_ieDwQ9k
There's a related paper from Christian Decker "Duplex Micropayment Channels"
http://www.tik.ee.ethz.ch/file/716b955c130e6c703fac336ea17b1670/duplex-micropayment-channels.pdf
Post by Gavin Andresen
But even if you could convince me that it WAS better from a
We don't need to convince people, we just have to code it and
demonstrate it, which people are working on.
But Lightning does need a decentralised and secure Bitcoin network for
anchor and reclaim transactions, so take it easy with the mega-blocks
in the mean-time.
Post by Gavin Andresen
a) Lightning Network is nothing but a whitepaper right now. We are a long
way from a practical implementation supported by even one wallet.
maybe you want to check in on
https://github.com/ElementsProject/lightning
and help code it.
I expect we can get something running inside a year. Which kind of
obviates the burning "need" for a schedule into the far future rising
to 8GB with unrealistic bandwidth growth assumptions that will surely
cause centralisation problems.
For block-size I think it would be better to have a 2-4 year or one
off size bump with policy limits and then re-evaluate after we've seen
what lightning can do.
I have been saying the same thing ad-nauseam for weeks.
Post by Gavin Andresen
b) The Lightning Network paper itself says bigger blocks will be needed even
if (especially if!) Lightning is wildly successful.
Not nearly as big as if you tried to put the transactions it would
enable on the chain, that's for sure! We dont know what that limit is
but people have been imagining 1,000 or 10,000 transactions per anchor
transaction. If micro-payments get popular many more.
Basically users would park Bitcoins a on a hub channel instead of the
blockchain. The channel can stay up indefinitely, and the user has
assurances analogous to greenaddress time-lock mechanism
Flexcap maybe a better solution because that allows bursting
block-size when economically rational.
Note that the time-locks with lightning are assumed to be relative
CTLV eg using the mechanism as Mark Friedenbach described in a post
here, and as implemented in the elements sidechain, so there is not a
huge rush to reclaim funds. They can be spread out in time.
If you want to scale Bitcoin - like really scale it - work on
lightning. Lightning + a decentralised and secure Bitcoin, scales
further and is more trustless than Bitcoin forced into centralisation
via premature mega-blocks.
To my mind a shorter, more conservative block-size increase to give a
few years room is enough for now. We'll be in a better position to
know what the right next step is after lightning is running.
Something to mention is you can elide transactions before reclaiming.
So long as the balancing transaction is correct, someone online can
swap it for you with an equal balance one with less hops of
intermediate payment flows.
It's pretty interesting what you can do already. I'm fairly confident
we're not finished algorithmically optimising it either. It's
surprising how much new territory there is just sitting there
unexplored.
Adam
_______________________________________________
bitcoin-dev mailing list
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
Eric Lombrozo
2015-06-29 01:13:29 UTC
Permalink
The Lightning network is essentially a contract negotiation scheme that rewards cooperation. Defection amounts to either broadcasting early or not responding to signature requests. If done right, either of these situations incurs a bigger cost to the uncooperative party than cooperation. This is why I say blockchains are like a fix to the prisoner’s dilemma.

The blockchain becomes essentially a dispute resolution mechanism and a way to anchor stuff. There’s no use case covered by the current method of “flood the entire network and confirm on blockchain” that can’t be covered by a method of “participate in a contract which guarantees me payment on the blockchain if anyone is uncooperative but which rarely requires touching the blockchain” methinks.
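The incentive argument can be made concrete with toy numbers (all illustrative assumptions, not from the post): as long as the on-chain penalty for broadcasting a revoked state exceeds whatever the stale state would gain, cooperation strictly dominates defection.

```python
# Toy payoff sketch of the "rewards cooperation" point (illustrative
# numbers; not a model of any actual Lightning contract).
COOPERATE_PAYOFF = 10   # settle the channel at the latest agreed state
DEFECT_GROSS = 12       # value of broadcasting an old, more favourable state
PENALTY = 12            # bond the counterparty claims on-chain if caught

def payoff(defect):
    """Net outcome for a party choosing to defect or cooperate."""
    return DEFECT_GROSS - PENALTY if defect else COOPERATE_PAYOFF

print("defect:", payoff(defect=True), " cooperate:", payoff(defect=False))
```

With any penalty at least as large as the defection gain, the uncooperative branch nets less than cooperating, which is the sense in which the blockchain acts as the dispute-resolution backstop.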


- Eric Lombrozo
Post by Adam Back
Andy Schroder
2015-06-29 01:45:09 UTC
Permalink
Regarding privacy and the lightning network: has this been well
addressed? I haven't seen much that leads me to believe it has. The only
options I see are to have many open payment channels, but that is still
limiting and inefficient, or to require an extensive number of hops in
your payment route, but this is also limiting.



Andy Schroder
Post by Adam Back
Tom Harding
2015-06-30 00:42:36 UTC
Permalink
We don't know what that limit is but people have been imagining 1,000
or 10,000 transactions per anchor transaction. Basically users would
park Bitcoins on a hub channel instead of the blockchain.
This re-introduces a solved problem (solved by bitcoin better than
anything else): worrying whether your "payment hub" actually connects
to whom you wish to pay.

There will be enormous network effects and centralization pressure in
the payment-hub space. A few entities, maybe a single entity, should be
expected to quickly corner the market and own the whole thing.

This concept is far too untested to justify amateur economic meddling in
the bitcoin fee market by setting a restrictive hard cap below technical
feasibility.

I can guess exactly who would want to keep bitcoin from improving:
*those who hope to be the future payment hub oligarchs*.
Tom Harding
2015-07-10 02:55:15 UTC
Permalink
Post by Adam Back
Post by Gavin Andresen
Post by Raystonn .
Write coalescing works fine when you have multiple writes headed to
the same (contiguous) location. Will lightning be useful when we
have more unique transactions being sent to different addresses, and
not just multiple transactions between the same sender and address?
I have doubts.
Don't get me wrong, I think the Lightning Network is a fantastic idea
and a great experiment and will likely be used for all sorts of great
payment innovations (micropayments for bandwidth maybe, or maybe
paying workers by the hour instead of at the end of the month). But I
don't think it is a scaling solution for the types of payments the
Bitcoin network is handling today.
Lightning allows Bitcoin to scale even without a block-size increase,
and therefore considerably impacts any calculation of how much
block-size is required. In this light you appear to have been
attempting to push through a change without even understanding the
alternatives or greater ecosystem.
Lightning Network (LN) does not "allow Bitcoin to scale". LN is a
bitcoin application. The properties of LN are dependent on bitcoin, but
they are distinct from bitcoin.

In particular, an under-appreciated aspect of LN is that in order for
your interactions to be consolidated and consume less blockchain space,
you must give up significant control of the money you send AND the money
you receive.

If either sender or receiver wants to record a transaction in the
blockchain immediately, there is no space saving versus bitcoin. More
blockchain space is actually used, due to LN overhead.

If both sender and receiver are willing to delay recording in the
blockchain, then the situation is analogous to using banks. Sender's
hub pays from sender channel, to receiver channel at receiver's hub.

Neither side fully relinquishes custody of the money in their multisig
payment hub channels -- this is an improvement on traditional bank
accounts -- BUT...

- Sender is required to lock funds under his hub's signature - this is
well discussed
- Less well discussed: *to achieve any consolidation at all, receiver
must ALSO be willing to lock received funds under his hub's signature*

I'll put it another way. LN only "solves" the scaling problem if
receiver's hub has pre-committed sufficient funds to cover the receipts,
AND if receiver endures for a period of time -- directly related to the
scaling factor -- being unable to spend money received UNLESS his
payment hub signs off on his spend instructions.

Jorge Timón
2015-06-28 17:53:40 UTC
Permalink
Post by Gavin Andresen
But ultimately, lightning usefully solves a problem where participants
have semi-long lived payment endpoints.
Very few of my own personal Bitcoin transactions fit that use-case.
In fact, very few of my own personal dollar transactions fit that use-case
(I suppose if I was addicted to Starbucks I'd have one of their payment
cards that I topped up every once in a while, which would map nicely onto a
payment channel). I suppose I could set up a payment channel with the grocery
store I shop at once a week, but that would be inconvenient (I'd have to
pre-fund it) and bad for my privacy.
Unlike other payment channel designs, the lightning payment channel
network allows you to pay people that you haven't pre-funded a
channel with.
There must be a path in the network from you to the payee.
That's simpler with only a few hubs although too few hubs is bad for privacy.
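The path requirement can be sketched as a breadth-first search over a hypothetical channel graph (nodes are wallets or hubs, edges are open channels with capacity); this is an illustration of the routing idea, not an actual Lightning implementation.

```python
from collections import deque

def find_route(channels, src, dst, amount):
    """Return a list of hops from src to dst whose channels can each
    carry `amount`, or None if no such path exists.
    channels: dict mapping node -> list of (peer, capacity)."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for peer, capacity in channels.get(path[-1], []):
            if peer not in seen and capacity >= amount:
                seen.add(peer)
                queue.append(path + [peer])
    return None

graph = {
    "alice": [("hub1", 5)],
    "hub1":  [("hub2", 5)],
    "hub2":  [("bob", 5)],
}
print(find_route(graph, "alice", "bob", 5))   # route exists via the hubs
print(find_route(graph, "alice", "bob", 10))  # no channel has capacity
```

This also makes Jorge's point visible: with only a few well-connected hubs, routes are short and easy to find, at the cost of most paths flowing through the same intermediaries.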
Post by Gavin Andresen
I can see how payment channels would work between big financial institutions
as a settlement layer, but isn't that exactly the centralization concern
that is making a lot of people worried about increasing the max block size?
Worried about financial institutions using Bitcoin? No. Who said that?
Post by Gavin Andresen
And if there are only a dozen or two popular hubs, that's much worse
centralization-wise compared to a few thousand fully-validating Bitcoin
nodes.
Remember the hubs cannot steal any coins.
Post by Gavin Andresen
Don't get me wrong, I think the Lightning Network is a fantastic idea and a
great experiment and will likely be used for all sorts of great payment
innovations (micropayments for bandwidth maybe, or maybe paying workers by
the hour instead of at the end of the month). But I don't think it is a
scaling solution for the types of payments the Bitcoin network is handling
today.
I don't see how people could pay for coffee with bitcoin in the long
term otherwise.
Bitcoin IOUs from a third party (or federation) maybe, but not with
real p2p btc.
Andrew Lapp
2015-06-28 19:22:53 UTC
Permalink
I don't mind a set of central authorities being part of an option IF the
central authority doesn't need to be trusted. On the blockchain, the
larger miner is, the more you have to trust them to not collude with
anyone to reverse your payments or destroy the trust in the system in
some attack. On the Lightning network, a large hub can't steal my money.

I think most people share the sentiment that trustlessness is what
matters and decentralization is just a synonym for trustlessness when
talking about the blockchain and mining, however decentralization isn't
necessarily synonymous with trustlessness nor is centralization
synonymous with trust-requiring when you're talking about something else.

-Andrew Lapp
Post by Gavin Andresen
I can see how payment channels would work between big financial
institutions as a settlement layer, but isn't that exactly the
centralization concern that is making a lot of people worried about
increasing the max block size?
Benjamin
2015-06-28 19:40:20 UTC
Permalink
"On the Lightning network, a large hub can't steal my money." Malicious
hubs could flood the network. The way it is discussed now, it's not
resistant to Sybil attacks either. It's an interesting idea at a very
early stage, not at all a drop-in replacement for Bitcoin anytime soon,
as some imply. Blockstream shouldn't turn these issues into pitches for
the tech of their own for-profit enterprise.
Post by Andrew Lapp
Milly Bitcoin
2015-06-28 12:32:52 UTC
Permalink
Post by Adam Back
Also decentralisation is key, and that is something we can improve
with pooling protocols to phase out the artificial centralisation.

So how is the level of decentralization measured? I see many claims on
this list that such-and-such action will increase or decrease
centralization, and sometimes people talk in absolutes, such as something
being decentralized or centralized. Some of the arguments seem to make
claims without providing any kind of analysis or explanation.

Nothing is truly decentralized; decentralization is just an
approximation of having a collection of centralized systems interact in
some way. I would suggest coming up with some sort of metric so these
discussions can start from a baseline when discussing changes.
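One possible baseline, sketched here as an assumption about what such a metric might look like rather than anything established in this thread: treat hashrate (or node) shares as a distribution and measure its concentration, e.g. via a Herfindahl-style index or the smallest number of entities that together control a majority.

```python
# Two candidate concentration metrics over mining-pool hashrate shares.
# The shares below are made-up illustrative numbers.

def hhi(shares):
    """Herfindahl index: sum of squared shares.
    1.0 = one entity has everything; near 0 = highly dispersed."""
    return sum(s * s for s in shares)

def majority_coefficient(shares, threshold=0.5):
    """Smallest number of entities whose combined share exceeds
    the threshold (a 51%-attack-style measure)."""
    total, count = 0.0, 0
    for s in sorted(shares, reverse=True):
        total += s
        count += 1
        if total > threshold:
            break
    return count

pools = [0.25, 0.20, 0.15, 0.15, 0.10, 0.10, 0.05]
print("HHI:", hhi(pools))
print("entities needed for majority:", majority_coefficient(pools))
```

Either number gives a baseline that can be re-measured after a proposed change, which is the kind of before/after comparison the post is asking for.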

Russ