Discussion:
Hard fork proposal from last week's meeting
Wang Chun via bitcoin-dev
2017-03-28 16:59:32 UTC
I proposed this hard fork approach last year at the Hong Kong Consensus
meeting, but it was immediately rejected by the Core developers there.
More than a year later it seems that many people still have not heard
of it, so I am posting it here again for comment.

The basic idea is that, as many of us agree, a hard fork is risky and
should be well prepared; we need a long time to deploy it.

Spam transactions aside, block capacity is approaching its limit, and
we must think ahead. Shall we write a patch right now that removes the
1MB block size limit but does not activate until far in the future? I
would propose removing the 1MB limit at the next block halving in
spring 2020, limiting the block size only to 32MiB, the maximum the
current p2p protocol allows. This patch must be in the immediate next
release of Bitcoin Core.

With this patch in Core's next release, Bitcoin works just as before
and no fork will occur until spring 2020, but everyone will know that
a fork is scheduled. Third-party services, libraries, wallets and
exchanges will have enough time to prepare for it over the next three
years.

We don't yet have agreement on how to increase the block size limit.
There have been many proposals over the past years: BIP100, 101, 102,
103, 104, 105, 106, 107, 109, 148, 248, BU, and so on. With this patch
already in Core's release, all of these hard fork proposals become
soft forks. We will have enough time to discuss them and decide which
one to adopt. For example, if we choose to fork to only 2MB, then
since 32MiB is already scheduled, reducing the limit from 32MiB to 2MB
will be a soft fork.

Anyway, we must code something right now, before it becomes too late.
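The schedule being proposed can be sketched as a height-dependent size limit. This is an illustrative Python sketch, not Bitcoin Core code; it assumes the 2020 halving height of 630,000 and the 1MB and 32MiB figures from the proposal:

```python
# Sketch of the proposed schedule: keep the 1MB limit until the next
# halving (block 630,000, expected spring 2020), then allow up to
# 32MiB, the historical p2p message cap the email cites.
HALVING_HEIGHT_2020 = 630_000
LEGACY_LIMIT = 1_000_000            # current 1MB consensus limit
P2P_MESSAGE_CAP = 32 * 1024 * 1024  # 32MiB

def max_block_size(height: int) -> int:
    """Size limit in effect at a given block height under the proposal."""
    return LEGACY_LIMIT if height < HALVING_HEIGHT_2020 else P2P_MESSAGE_CAP

def is_valid_size(height: int, block_size: int) -> bool:
    return block_size <= max_block_size(height)
```

Until the flag height both old and patched nodes enforce the same 1MB rule, which is why no fork can occur before spring 2020.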
Matt Corallo via bitcoin-dev
2017-03-28 17:13:09 UTC
Not sure what "last week's meeting" is in reference to?

Agreed that a hard fork should be well-prepared, but I think it's
dangerous to assume that whatever hard fork is eventually agreed upon
would be a simple relaxation of the block size. For example, Johnson
Lau's previous proposal, Spoonnet, which I think is probably one of
the better ones, would be incompatible with these rules.

I, of course, worry about what happens if we cannot come to consensus on
a number to soft fork down to, potentially significantly risking miner
profits (and, thus, the security of Bitcoin) if a group is able to keep
things "at the status quo". That said, to alleviate that we could
simply do something based on historical transaction growth (which is
somewhat linear, with a few inflection points), but that number ends
up being super low (e.g. somewhere around 2MB at the next halving,
which SegWit itself already provides) :/.

We could, of course, focus on designing a hard fork's activation and
technical details, with a very large block size increase in it (i.e.
closer to 4-6MB at the next halving or so, something we could at least
be confident we could develop software for), with the intention to
soft fork it back down if miner profits are suffering.

Matt

Jared Lee Richardson via bitcoin-dev
2017-03-29 08:45:14 UTC
> That said, for that to be alleviated we
> could simply do something based on historical transaction growth (which
> is somewhat linear, with a few inflection points),

Where do you get this? Transaction growth over the last four years
averages +65% per year, and over the last two, +80% per year. That's
very much not linear.
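The rates above compound, which is the crux of the disagreement with a "somewhat linear" model. A quick check of what the cited annual rates imply (only the percentage rates come from the email; the comparison is illustrative):

```python
# Compound vs. linear growth at the rates cited: +65%/yr over 4 years
# and +80%/yr over 2 years.
def total_growth(annual_rate: float, years: int) -> float:
    """Overall multiplier implied by a constant annual growth rate."""
    return (1 + annual_rate) ** years

four_year = total_growth(0.65, 4)   # ~7.4x overall
two_year = total_growth(0.80, 2)    # ~3.2x overall

# A linear extrapolation (+65% of the starting volume each year) would
# give only 1 + 0.65 * 4 = 3.6x over four years, roughly half as much.
linear_four_year = 1 + 0.65 * 4
```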



Alphonse Pace via bitcoin-dev
2017-03-28 17:23:31 UTC
What meeting are you referring to? Who were the participants?

Removing the limit but relying on the p2p protocol does not give a
true 32MiB limit, but rather a limit set by whatever transport methods
provide. This can lead to differing consensus if alternative relay
layers are used. What you seem to be asking for is an unbounded block
size (or at least one determined by whatever miners produce). This has
the possibility (and even likelihood) of removing many participants
from the network, including many small miners.

32MB in less than 3 years also appears to be far beyond the limits of
safety, which are known to be reached far sooner, and we cannot expect
hardware and networking layers to improve by those amounts in that
time.

It also seems like it would be much better to wait until SegWit activates
in order to truly measure the effects on the network from this increased
capacity before committing to any additional increases.

-Alphonse



On Tue, Mar 28, 2017 at 11:59 AM, Wang Chun via bitcoin-dev <
bitcoin-***@lists.linuxfoundation.org> wrote:

> I've proposed this hard fork approach last year in Hong Kong Consensus
> but immediately rejected by coredevs at that meeting, after more than
> one year it seems that lots of people haven't heard of it. So I would
> post this here again for comment.
>
> The basic idea is, as many of us agree, hard fork is risky and should
> be well prepared. We need a long time to deploy it.
>
> Despite spam tx on the network, the block capacity is approaching its
> limit, and we must think ahead. Shall we code a patch right now, to
> remove the block size limit of 1MB, but not activate it until far in
> the future. I would propose to remove the 1MB limit at the next block
> halving in spring 2020, only limit the block size to 32MiB which is
> the maximum size the current p2p protocol allows. This patch must be
> in the immediate next release of Bitcoin Core.
>
> With this patch in core's next release, Bitcoin works just as before,
> no fork will ever occur, until spring 2020. But everyone knows there
> will be a fork scheduled. Third party services, libraries, wallets and
> exchanges will have enough time to prepare for it over the next three
> years.
>
> We don't yet have an agreement on how to increase the block size
> limit. There have been many proposals over the past years, like
> BIP100, 101, 102, 103, 104, 105, 106, 107, 109, 148, 248, BU, and so
> on. These hard fork proposals, with this patch already in Core's
> release, they all become soft fork. We'll have enough time to discuss
> all these proposals and decide which one to go. Take an example, if we
> choose to fork to only 2MB, since 32MiB already scheduled, reduce it
> from 32MiB to 2MB will be a soft fork.
>
> Anyway, we must code something right now, before it becomes too late.
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-***@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
Wang Chun via bitcoin-dev
2017-03-28 17:31:36 UTC
The basic idea is: let's stop the debate over whether we should
upgrade to 2MB, 8MB or 32MiB. 32MiB is well above any proposal's upper
limit, so any final decision would be a soft fork relative to this
already-deployed release. If by 2020 we still agree that 1MB is
enough, the limit can be changed back to 1MB, and that would also be a
soft fork on top of it.
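The "any later choice is a soft fork" argument rests on the rule sets nesting: every block valid under a tighter size limit is also valid under a looser one, so nodes enforcing the looser scheduled rule never reject blocks a tightened network produces. A minimal sketch (the 2MB figure is just the example from the thread):

```python
# Scheduled loose limit vs. a hypothetical tighter limit chosen later.
SCHEDULED_LIMIT = 32 * 1024 * 1024  # deployed now, active 2020
LATER_LIMIT = 2_000_000             # e.g. a 2MB limit agreed later

def valid_under(limit: int, block_size: int) -> bool:
    return block_size <= limit

def tightening_is_soft_fork(tight: int, loose: int) -> bool:
    """A new rule is a soft fork of an old one when every block it
    accepts is also accepted by the old rule."""
    return tight <= loose
```

The same nesting holds for reverting to 1MB, which is why both directions described above are soft forks relative to the scheduled 32MiB rule.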

Jeremy via bitcoin-dev
2017-03-28 17:33:31 UTC
I think it's probably safer to have a fork-to-minimum (e.g. minimal
coinbase + header) after a certain date than to fork up at a certain
date. At least in that case the default isn't breaking consensus, but
you still get the same pressure to fork to a permanent solution.

I don't endorse the above proposal; I remark on it only for the sake
of guiding the argument you are making.
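The fork-to-minimum idea flips the default: after the flag height, blocks must shrink to near-empty unless a separately agreed replacement rule has activated. A hedged sketch; the flag height, the 1,000-byte "minimal" figure, and all names here are hypothetical illustrations, not anything specified in the thread:

```python
FLAG_HEIGHT = 630_000   # hypothetical flag date, as a block height
MINIMAL_SIZE = 1_000    # roughly a header plus a bare coinbase (illustrative)
LEGACY_LIMIT = 1_000_000

def max_block_size(height: int,
                   replacement_active: bool = False,
                   replacement_limit: int = 0) -> int:
    """After FLAG_HEIGHT the limit collapses to the minimum unless a
    replacement rule has been agreed and activated in the meantime."""
    if height < FLAG_HEIGHT:
        return LEGACY_LIMIT
    return replacement_limit if replacement_active else MINIMAL_SIZE
```

Because the default tightens rather than loosens the rules, doing nothing leaves consensus intact (at the cost of capacity), which is the safety property being argued for.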


--
@JeremyRubin <https://twitter.com/JeremyRubin>

Douglas Roark via bitcoin-dev
2017-03-28 17:50:42 UTC
On 2017/3/28 10:31, Wang Chun via bitcoin-dev wrote:
> The basic idea is, let's stop the debate for whether we should upgrade
> to 2MB, 8MB or 32MiB. 32MiB is well above any proposals' upper limit,
> so any final decision would be a soft fork to this already deployed
> release. If by 2020, we still agree 1MB is enough, it can be changed
> back to 1MB limit and it would also a soft fork on top of that.

While I think this idea isn't bad in and of itself, there is an
assumption being made that the community would come to consensus
regarding a future soft fork. This, IMO, is a dangerous assumption.
Failure would potentially leave the network at a hard fork well past any
current proposal. It would also potentially lead to miners becoming
hostile players and making political demands. ("Soft fork down to X MB
or I'll shut down 15% of the network hashrate and work to shut down more
elsewhere.") I'd hope we can all agree that such a scenario would be
terrible.

I do agree that the idea of giving everybody plenty of time to plan is
critical. (Telecom providers need months, if not years, to plan for even
simple upgrades, which often are not as simple as they look on paper.) I
just think this proposal, while well-meaning, comes across as a bit of a
trojan horse as-is. I can't get behind it, although it could potentially
be molded into something else that's interesting, e.g., Johnson Lau's
Spoonnet. Fork-to-minimum, while introducing its own potential problems,
would put much less pressure on full nodes, and on the ecosystem as a
whole, if the max needed to be soft forked down.

(I'd also like to see SegWit go live so that we can get an idea of how
much pressure there really is on the network, thereby giving us a better
idea of how high we can go. I still think we're flying a bit blind in
that regard.)

--
---
Douglas Roark
Cryptocurrency, network security, travel, and art.
https://onename.com/droark
***@vt.edu
PGP key ID: 26623924
Alphonse Pace via bitcoin-dev
2017-03-28 17:53:11 UTC
Juan,

I suggest you take a look at this paper:
http://fc16.ifca.ai/bitcoin/papers/CDE+16.pdf It may help you form
opinions based on science rather than on what appears to be nothing
more than a hunch. It shows that even 4MB is unsafe. SegWit provides
up to this limit.

8MB is most definitely not safe today.

Whether it is unsafe or impossible is the topic, since Wang Chun proposed
making the block size limit 32MiB.


Wang Chun,

Can you specify what meeting you are talking about? You seem not to
have replied on that point. Who were the participants and what was the
purpose of the meeting?

-Alphonse

On Tue, Mar 28, 2017 at 12:33 PM, Juan Garavaglia <***@112bit.com> wrote:

> Alphonse,
>
>
>
> In my opinion, if the 1MB limit was OK in 2010, then from a network,
> storage and CPU perspective an 8MB limit is OK in 2016 and a 32MB
> limit is valid at the next halving; otherwise either 1MB was too high
> in 2010, which is possible, or 1MB is too low today.
>
>
>
> Whether it is unsafe or impossible to raise the block size is a
> different topic.
>

>
> Regards
>
>
>
> Juan
Juan Garavaglia via bitcoin-dev
2017-03-28 22:36:18 UTC
Alphonse,

Even though several of the experts involved in the document you refer
to have my respect and admiration, I do not agree with some of their
conclusions. Some of their estimates were never accurate and others
have since changed, such as bootstrap time and cost per confirmed
transaction: they consider a network of 450,000,000 GH while today it
is 3,594,236,966 GH, the energy consumption per GH is outdated, the
cost of electricity was wrong even when the document was written, and
it is hard to find any parameter used there that is valid for an
analysis today.

Again, with all respect to the experts involved, that analysis is not
valid today.

I tend to believe more in Moore's law, Butters' law of photonics and
Kryder's law, all of which have been verified for many years and
support the view that 32MB in 2020 is possible, and equal to or less
of a burden than 1MB was in 2010.

Again, maybe it is not possible; Johnson Lau and Luke-Jr have invested
a significant amount of time investigating ways to do a safe hard
fork, and maybe a safe hard fork is not possible today. But in terms
of processing power, bandwidth and storage it is totally valid, and
Wang Chun's proposal has solid grounds.
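The 1MB-in-2010 versus 32MB-in-2020 comparison implies a specific compound growth rate in hardware capability, which can be checked directly (the arithmetic below is just that implication, not a claim about actual hardware trends):

```python
# Going from 1MB (2010) to 32MB (2020) is a 32x increase over 10 years.
# The compound annual rate that achieves it:
years = 10
multiplier = 32
annual_rate = multiplier ** (1 / years) - 1   # ~41.4% per year

# So the comparison holds only if network, storage and CPU capability
# all compounded at roughly 41%/yr over the decade -- the empirical
# question the rest of this thread disputes.
```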

Regards

Juan


Luv Khemani via bitcoin-dev
2017-03-29 02:59:58 UTC
Hi Juan


> I tend to believe more in Moore’s Law, Butters' Law of Photonics and Kryder’s Law; all have been verified for many years and support that 32 MB in 2020 is possible and an equal or lesser burden than 1 MB was in 2010.



Protocol development, especially for a system in control of people's money, cannot be based on beliefs. Do you have actual data showing significant increases in desktop CPU, memory and bandwidth?


All empirical evidence points to the opposite.

Intel has been struggling to eke out 5-10% gains for each generation of its CPUs. The total blockchain size at 1MB blocks alone is growing much faster than this.

CPU Core counts have also been stagnant for a decade.

Disk space growth has also been slowing, and with the trend towards SSDs, the available disk space in a typical PC has actually declined sharply.
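The claim that chain growth outpaces hardware gains can be made concrete with a rough sketch. The chain size and the yearly hardware-improvement figure below are assumptions for illustration, not measurements:

```python
# Compare the yearly growth of total blockchain storage at 1MB blocks
# against an assumed ~7%/year hardware improvement.

BLOCKS_PER_DAY = 24 * 6                  # one block per ~10 minutes
MB_PER_YEAR = 1 * BLOCKS_PER_DAY * 365   # 1MB per block

chain_gb = 110.0  # assumed total chain size in early 2017, in GB
hw_gain = 0.07    # assumed yearly CPU/storage improvement

growth = (MB_PER_YEAR / 1000) / chain_gb
print(f"chain: +{MB_PER_YEAR / 1000:.1f} GB/year "
      f"(~{growth:.0%} of a {chain_gb:.0f} GB chain) "
      f"vs hardware: +{hw_gain:.0%}/year")
# → chain: +52.6 GB/year (~48% of a 110 GB chain) vs hardware: +7%/year
```

Even if the assumed figures are off by a factor of two, storage demand at full 1MB blocks grows at a rate that dwarfs single-digit hardware gains.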


Regards

Luv


________________________________
From: bitcoin-dev-***@lists.linuxfoundation.org <bitcoin-dev-***@lists.linuxfoundation.org> on behalf of Juan Garavaglia via bitcoin-dev <bitcoin-***@lists.linuxfoundation.org>
Sent: Wednesday, March 29, 2017 6:36 AM
To: Alphonse Pace; Wang Chun
Cc: Bitcoin Protocol Discussion
Subject: Re: [bitcoin-dev] Hard fork proposal from last week's meeting


Alphonse,



Even though several of the experts involved in the document you refer to have my respect and admiration, I do not agree with some of their conclusions. Some of their estimations are not accurate, and others have changed, like Bootstrap Time and Cost per Confirmed Transaction: they consider a network of 450,000,000 GH while today it is 3,594,236,966 GH, the energy consumption per GH is outdated, the cost of electricity was wrong even when the document was written, and it is hard to find any parameter used that is still valid for an analysis today.



Again, with all respect to the experts involved, that analysis is not valid today.



I tend to believe more in Moore’s Law, Butters' Law of Photonics and Kryder’s Law; all have been verified for many years and support that 32 MB in 2020 is possible and an equal or lesser burden than 1 MB was in 2010.



Again, maybe it is not possible; Johnson Lau and LukeJr invested a significant amount of time investigating ways to do a safe HF, and maybe it is not possible to do a safe HF today. But from the standpoint of processing power, bandwidth and storage it is totally valid, and Wang Chun's proposal has solid grounds.



Regards



Juan





From: Alphonse Pace [mailto:***@gmail.com]
Sent: Tuesday, March 28, 2017 2:53 PM
To: Juan Garavaglia <***@112bit.com>; Wang Chun <***@gmail.com>
Cc: Bitcoin Protocol Discussion <bitcoin-***@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] Hard fork proposal from last week's meeting



Juan,



I suggest you take a look at this paper: http://fc16.ifca.ai/bitcoin/papers/CDE+16.pdf It may help you form opinions based on science rather than what appears to be nothing more than a hunch. It shows that even 4MB is unsafe. SegWit provides up to this limit.






8MB is most definitely not safe today.



Whether it is unsafe or impossible is the topic, since Wang Chun proposed making the block size limit 32MiB.





Wang Chun,

Can you specify what meeting you are talking about? You seem to have not replied on that point. Who were the participants and what was the purpose of this meeting?



-Alphonse



On Tue, Mar 28, 2017 at 12:33 PM, Juan Garavaglia <***@112bit.com<mailto:***@112bit.com>> wrote:

Alphonse,



In my opinion, if a 1MB limit was OK in 2010, then an 8MB limit is OK in 2016 and a 32MB limit is valid at the next halving, from a network, storage and CPU perspective; otherwise either 1MB was too high in 2010, which is possible, or 1MB is too low today.



Whether it is unsafe or impossible to raise the blocksize is a different topic.



Regards



Juan





From: bitcoin-dev-***@lists.linuxfoundation.org<mailto:bitcoin-dev-***@lists.linuxfoundation.org> [mailto:bitcoin-dev-***@lists.linuxfoundation.org<mailto:bitcoin-dev-***@lists.linuxfoundation.org>] On Behalf Of Alphonse Pace via bitcoin-dev
Sent: Tuesday, March 28, 2017 2:24 PM
To: Wang Chun <***@gmail.com<mailto:***@gmail.com>>; Bitcoin Protocol Discussion <bitcoin-***@lists.linuxfoundation.org<mailto:bitcoin-***@lists.linuxfoundation.org>>
Subject: Re: [bitcoin-dev] Hard fork proposal from last week's meeting



What meeting are you referring to? Who were the participants?



Emin Gün Sirer via bitcoin-dev
2017-03-29 06:24:05 UTC
> Even though several of the experts involved in the document you refer to have my
> respect and admiration, I do not agree with some of their conclusions

I'm one of the co-authors of that study. I'd be the first to agree with
your conclusion, and to argue that the 4MB size suggested in that paper
should not be used without adjusting for two important changes to the
network.

Our recent measurements of the Bitcoin P2P network show that network speeds
have improved tremendously. From February 2016 to February 2017, the average
provisioned bandwidth of a reachable Bitcoin node went up by approximately
70%.
And that's just in the last year.
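As a rough illustration only: if the paper's 4MB figure scaled linearly with provisioned node bandwidth (a simplifying assumption, since bandwidth is only one of several bottlenecks), a 70% one-year improvement would move it as follows:

```python
# Hypothetical linear rescaling of the paper's 4MB figure by the
# measured 70% bandwidth improvement; other bottlenecks are ignored.
paper_limit_mb = 4.0
bandwidth_gain = 1.70

print(f"~{paper_limit_mb * bandwidth_gain:.1f} MB")
# → ~6.8 MB
```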

Further, the emergence of high-speed block relay networks, like Falcon (
http://www.falcon-net.org)
and FIBRE, as well as block compression, e.g. BIP152 and xthin, change the
picture dramatically.

So, the 4MB limit mentioned in our paper should not be used as a protocol
limit today.

Best,
- egs



Johnson Lau via bitcoin-dev
2017-03-29 15:34:41 UTC
> On 29 Mar 2017, at 14:24, Emin Gün Sirer via bitcoin-dev <bitcoin-***@lists.linuxfoundation.org> wrote:
>
> > Even though several of the experts involved in the document you refer to have my respect and admiration, I do not agree with some of their conclusions
>
> I'm one of the co-authors of that study. I'd be the first to agree with your conclusion
> and argue that the 4MB size suggested in that paper should not be used without
> compensation for two important changes to the network.
>
> Our recent measurements of the Bitcoin P2P network show that network speeds
> have improved tremendously. From February 2016 to February 2017, the average
> provisioned bandwidth of a reachable Bitcoin node went up by approximately 70%.
> And that's just in the last year.

4 * 144 * 30 = 17.3GB per month, or 207GB per year. Full node initialisation will become prohibitive for most users unless a shortcut is made (e.g. witness pruning or UTXO commitments, but these are not trust-free).
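The arithmetic above can be checked directly (4MB blocks, ~144 blocks per day, 30-day months):

```python
# Storage growth at 4MB blocks: MB/block * blocks/day * days.
block_mb = 4
blocks_per_day = 144

per_month_gb = block_mb * blocks_per_day * 30 / 1000  # 17.28 GB
per_year_gb = per_month_gb * 12                       # 207.36 GB

print(f"{per_month_gb:.1f} GB/month, {per_year_gb:.0f} GB/year")
# → 17.3 GB/month, 207 GB/year
```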

>
> Further, the emergence of high-speed block relay networks, like Falcon (http://www.falcon-net.org <http://www.falcon-net.org/>)
> and FIBRE, as well as block compression, e.g. BIP152 and xthin, change the picture dramatically.

Also, as the co-author of the selfish mining paper, you should know that all these technologies assume big miners are benevolent.

>
> So, the 4MB limit mentioned in our paper should not be used as a protocol limit today.
>
> Best,
> - egs
Leandro Coutinho via bitcoin-dev
2017-04-01 16:15:32 UTC
One interesting exercise is to compare how much it costs to maintain a
checking account with how much it costs to run a full node.

It seems to be about US$120/year in the USA:
http://m.huffpost.com/us/entry/6219730

A 4TB hard drive is ~US$115:
https://www.amazon.com/gp/aw/d/B01LQQH86A/ref=mp_s_a_1_4

And it has a warranty of 3 years.

As your calculation shows, it will take more than 19 years to reach 4TB
with a 4MB blocksize.
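That figure follows from the thread's own numbers (4MB blocks, ~207 GB of new block data per year):

```python
# Years to fill a 4TB drive at the ~207 GB/year growth rate quoted above.
drive_gb = 4000
gb_per_year = 4 * 144 * 30 * 12 / 1000  # 207.36 GB/year

years = drive_gb / gb_per_year
print(f"~{years:.1f} years")
# → ~19.3 years
```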

Em 29/03/2017 12:35, "Johnson Lau via bitcoin-dev" <
bitcoin-***@lists.linuxfoundation.org> escreveu:


On 29 Mar 2017, at 14:24, Emin GÃŒn Sirer via bitcoin-dev <bitcoin-***@lists.
linuxfoundation.org> wrote:

>Even when several of the experts involved in the document you refer has my
respect and admiration, I do not agree with some of their conclusions

I'm one of the co-authors of that study. I'd be the first to agree with
your conclusion
and argue that the 4MB size suggested in that paper should not be used
without
compensation for two important changes to the network.


Our recent measurements of the Bitcoin P2P network show that network speeds
have improved tremendously. From February 2016 to February 2017, the average
provisioned bandwidth of a reachable Bitcoin node went up by approximately
70%.
And that's just in the last year.


4 * 144 * 30 = 17.3GB per month, or 207GB per year. Full node
initialisation will become prohibitive for most users until a shortcut is
made (e.g. witness pruning and UTXO commitment but these are not trust-free)


Further, the emergence of high-speed block relay networks, like Falcon (
http://www.falcon-net.org)
and FIBRE, as well as block compression, e.g. BIP152 and xthin, change the
picture dramatically.


Also as the co-author of the selfish mining paper, you should know all
these technology assume big miners being benevolent.


So, the 4MB limit mentioned in our paper should not be used as a protocol
limit today.

Best,
- egs



On Tue, Mar 28, 2017 at 3:36 PM, Juan Garavaglia via bitcoin-dev <
bitcoin-***@lists.linuxfoundation.org> wrote:

> Alphonse,
>
>
>
> Even when several of the experts involved in the document you refer has my
> respect and admiration, I do not agree with some of their conclusions some
> of their estimations are not accurate other changed like Bootstrap Time,
> Cost per Confirmed Transaction they consider a network of 450,000,00 GH and
> today is 3.594.236.966 GH, the energy consumption per GH is old, the cost
> of electricity is wrong even when the document was made and is hard to find
> any parameter used that is valid for an analysis today.
>
>
>
> Again with all respect to the experts involved in that analysis is not
> valid today.
>
>
>
> I tend to believe more in Moore’s law, Butters' Law of Photonics and
> Kryder’s Law all has been verified for many years and support that 32 MB in
> 2020 are possible and equals or less than 1 MB in 2010.
>
>
>
> Again may be is not possible Johnson Lau and LukeJr invested a significant
> amount of time investigating ways to do a safe HF, and may be not possible
> to do a safe HF today but from processing power, bandwidth and storage is
> totally valid and Wang Chung proposal has solid grounds.
>
>
>
> Regards
>
>
>
> Juan
>
>
>
>
>
> *From:* Alphonse Pace [mailto:***@gmail.com]
> *Sent:* Tuesday, March 28, 2017 2:53 PM
> *To:* Juan Garavaglia <***@112bit.com>; Wang Chun <***@gmail.com>
> *Cc:* Bitcoin Protocol Discussion <bitcoin-***@lists.linuxfoundation.org>
>
> *Subject:* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
>
>
>
> Juan,
>
>
>
> I suggest you take a look at this paper: http://fc16.ifca.ai/bit
> coin/papers/CDE+16.pdf It may help you form opinions based in science
> rather than what appears to be nothing more than a hunch. It shows that
> even 4MB is unsafe. SegWit provides up to this limit.
>
>
>
> 8MB is most definitely not safe today.
>
>
>
> Whether it is unsafe or impossible is the topic, since Wang Chun proposed
> making the block size limit 32MiB.
>
>
>
>
>
> Wang Chun,
>
>
> Can you specify what meeting you are talking about? You seem to have not
> replied on that point. Who were the participants and what was the purpose
> of this meeting?
>
>
>
> -Alphonse
>
>
>
> On Tue, Mar 28, 2017 at 12:33 PM, Juan Garavaglia <***@112bit.com> wrote:
>
> Alphonse,
>
>
>
> In my opinion if 1MB limit was ok in 2010, 8MB limit is ok on 2016 and
> 32MB limit valid in next halving, from network, storage and CPU perspective
> or 1MB was too high in 2010 what is possible or 1MB is to low today.
>
>
>
> If is unsafe or impossible to raise the blocksize is a different topic.
>
>
>
> Regards
>
>
>
> Juan
>
>
>
>
>
> *From:* bitcoin-dev-***@lists.linuxfoundation.org [mailto:
> bitcoin-dev-***@lists.linuxfoundation.org] *On Behalf Of *Alphonse
> Pace via bitcoin-dev
> *Sent:* Tuesday, March 28, 2017 2:24 PM
> *To:* Wang Chun <***@gmail.com>; Bitcoin Protocol Discussion <
> bitcoin-***@lists.linuxfoundation.org>
> *Subject:* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
>
>
>
> What meeting are you referring to? Who were the participants?
>
>
>
> Removing the limit but relying on the p2p protocol is not really a true
> 32MiB limit, but a limit of whatever transport methods provide. This can
> lead to differing consensus if alternative layers for relaying are used.
> What you seem to be asking for is an unbound block size (or at least
> determined by whatever miners produce). This has the possibility (and even
> likelihood) of removing many participants from the network, including many
> small miners.
>
>
>
> 32MB in less than 3 years also appears to be far beyond limits of safety
> which are known to exist far sooner, and we cannot expect hardware and
> networking layers to improve by those amounts in that time.
>
>
>
> It also seems like it would be much better to wait until SegWit activates
> in order to truly measure the effects on the network from this increased
> capacity before committing to any additional increases.
>
>
>
> -Alphonse
>
>
>
>
>
>
>
> On Tue, Mar 28, 2017 at 11:59 AM, Wang Chun via bitcoin-dev <
> bitcoin-***@lists.linuxfoundation.org> wrote:
>
> I proposed this hard fork approach last year at the Hong Kong Consensus
> meeting, but it was immediately rejected by the Core developers there.
> More than a year later, it seems many people still haven't heard of it,
> so I am posting it here again for comment.
>
> The basic idea is that, as many of us agree, a hard fork is risky and
> should be well prepared; we need a long lead time to deploy it.
>
> Spam transactions aside, block capacity is approaching its limit, and
> we must think ahead. Shall we code a patch right now that removes the
> 1MB block size limit but does not activate until far in the future? I
> propose removing the 1MB limit at the next block halving, in spring
> 2020, limiting the block size only to 32MiB, the maximum size the
> current p2p protocol allows. This patch must be in the immediate next
> release of Bitcoin Core.
>
> With this patch in Core's next release, Bitcoin works just as before;
> no fork will occur until spring 2020, but everyone will know a fork is
> scheduled. Third-party services, libraries, wallets and exchanges will
> have the next three years to prepare for it.
>
> We don't yet have agreement on how to increase the block size limit.
> There have been many proposals over the past years: BIP100, 101, 102,
> 103, 104, 105, 106, 107, 109, 148, 248, BU, and so on. With this patch
> already in Core's release, all of these hard fork proposals become
> soft forks. We'll have enough time to discuss them and decide which
> one to adopt. For example, if we choose to fork to only 2MB, then
> since 32MiB is already scheduled, reducing the limit from 32MiB to
> 2MB will be a soft fork.
>
> Anyway, we must code something right now, before it becomes too late.
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-***@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
>
>
>
>
>
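Wang Chun's soft-fork claim in the email quoted above (that once a 32MiB limit is scheduled, a later reduction to 2MB is only a soft fork) can be sanity-checked with a short sketch. This is illustrative Python, not consensus code; the limits are the proposal's numbers and everything else is a toy model:

```python
# Toy model: a block's validity here depends only on its size.
SCHEDULED_LIMIT = 32 * 1024 * 1024   # 32 MiB, the scheduled limit
LATER_LIMIT = 2 * 1000 * 1000        # 2 MB, a hypothetical later choice

def valid_under(limit, block_size):
    """A block is valid under a rule iff it does not exceed the limit."""
    return block_size <= limit

# Every block valid under the tighter 2MB rule is also valid under the
# scheduled 32MiB rule, so tightening later is a soft fork...
assert all(valid_under(SCHEDULED_LIMIT, size)
           for size in range(0, LATER_LIMIT + 1, 4096))

# ...while the reverse does not hold: a 10MB block is valid under the
# scheduled rule but not the tighter one, so loosening is a hard fork.
assert valid_under(SCHEDULED_LIMIT, 10_000_000)
assert not valid_under(LATER_LIMIT, 10_000_000)
```

The asymmetry is the whole argument: a rule change that only shrinks the set of valid blocks needs no coordinated flag day among old nodes.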
Jared Lee Richardson via bitcoin-dev
2017-03-29 09:16:43 UTC
> I suggest you take a look at this paper:
> http://fc16.ifca.ai/bitcoin/papers/CDE+16.pdf It may help you form
> opinions based in science rather than what appears to be nothing more
> than a hunch. It shows that even 4MB is unsafe. SegWit provides up to
> this limit.

I find this paper wholly unconvincing. Firstly, I note that it assumes an
electricity price of 10c/kWh in October 2015. As a miner operating and
building large farms at that time, I can guarantee you that almost no
large mines were paying anything close to that much for electricity, even
then. Had the author surveyed the big mines, or simply asked, he would
have found as much; the figure seems to have been simply made up. Even
U.S. industrial electricity prices are lower than that.

Moreover, he focuses his math almost entirely on mining, attributing in
table 1 some 98% of the "cost of processing a transaction" to mining.
That completely misunderstands the purpose of mining. Miners
occasionally, and trivially, resolve double-spend conflicts, but miners
are paid (and played against each other) to provide economic security
against attackers. They aren't paid to process transactions. Nodes
process transactions and are paid nothing to do so, and their costs are
100x more relevant to the blocksize debate than a paper about miner
costs. Miners' operational costs relate to economic-protection formulas,
not to the cost of a transaction.
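The sensitivity of such a cost estimate to the assumed electricity price is simple arithmetic. A back-of-the-envelope sketch in Python, where every input figure (network power draw, price, transaction volume) is an illustrative assumption rather than a measurement:

```python
def electricity_cost_per_tx(network_power_mw, price_per_kwh, txs_per_day):
    """Daily network electricity spend divided by daily transaction count."""
    kwh_per_day = network_power_mw * 1000 * 24  # MW -> kWh over 24h
    return kwh_per_day * price_per_kwh / txs_per_day

# Hypothetical late-2015 figures: ~300 MW total draw, ~150k tx/day.
at_10c = electricity_cost_per_tx(300, 0.10, 150_000)  # paper's 10c/kWh
at_4c = electricity_cost_per_tx(300, 0.04, 150_000)   # a cheap-power mine

# The per-transaction figure scales linearly with the assumed price, so
# overstating the price overstates the attributed cost by the same factor.
assert abs(at_10c / at_4c - 2.5) < 1e-9
```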

He also states: "the top 10% of nodes receive a 1MB block 2.4min earlier
than the bottom 10% — meaning that depending on their access to nodes, some
miners could obtain a significant and unfair lead over others in solving
hash puzzles."

He's using 2012-era mining logic. By October 2015, no miner of any size
was in the bottom 10% of node propagation. A small or medium-sized miner
mined shares on a pool and would be at most 30 seconds behind the pool;
pools that didn't get blocks within 20 seconds weren't pools for long. A
huge miner ran its own pool with good propagation times. For a
scientific paper, this reads like someone who had absolutely no idea
what was really going on in the mining world at the time. But again,
none of that relates to transaction "costs." Transactions cost nodes
money; protecting the network costs miners money. Miners are rewarded
with fees; nodes are rewarded only by utility and price increases.

On Tue, Mar 28, 2017 at 10:53 AM, Alphonse Pace via bitcoin-dev <
bitcoin-***@lists.linuxfoundation.org> wrote:

> Juan,
>
> I suggest you take a look at this paper:
> http://fc16.ifca.ai/bitcoin/papers/CDE+16.pdf It may help you form
> opinions based in science rather than what appears to be nothing more
> than a hunch. It shows that even 4MB is unsafe. SegWit provides up to
> this limit.
>
> 8MB is most definitely not safe today.
>
> Whether it is unsafe or impossible is the topic, since Wang Chun proposed
> making the block size limit 32MiB.
>
>
> Wang Chun,
>
> Can you specify what meeting you are talking about? You seem to have not
> replied on that point. Who were the participants and what was the purpose
> of this meeting?
>
> -Alphonse
>
> On Tue, Mar 28, 2017 at 12:33 PM, Juan Garavaglia <***@112bit.com> wrote:
>
>> Alphonse,
>>
>>
>>
>> In my opinion, if a 1MB limit was OK in 2010, then from a network,
>> storage and CPU perspective an 8MB limit is OK in 2016 and a 32MB
>> limit will be valid at the next halving. Otherwise, either 1MB was
>> too high in 2010, which is possible, or 1MB is too low today.
>>
>>
>>
>> Whether it is unsafe or impossible to raise the block size is a
>> different topic.
>>
>
>>
>> Regards
>>
>>
>>
>> Juan
>>
>>
>>
>>
>>
>> *From:* bitcoin-dev-***@lists.linuxfoundation.org [mailto:
>> bitcoin-dev-***@lists.linuxfoundation.org] *On Behalf Of *Alphonse
>> Pace via bitcoin-dev
>> *Sent:* Tuesday, March 28, 2017 2:24 PM
>> *To:* Wang Chun <***@gmail.com>; Bitcoin Protocol Discussion <
>> bitcoin-***@lists.linuxfoundation.org>
>> *Subject:* Re: [bitcoin-dev] Hard fork proposal from last week's meeting
>>
>>
>>
>> What meeting are you referring to? Who were the participants?
>>
>>
>>
>> Removing the limit but relying on the p2p protocol is not really a true
>> 32MiB limit, but a limit of whatever transport methods provide. This can
>> lead to differing consensus if alternative layers for relaying are used.
>> What you seem to be asking for is an unbound block size (or at least
>> determined by whatever miners produce). This has the possibility (and even
>> likelihood) of removing many participants from the network, including many
>> small miners.
>>
>>
>>
>> 32MB in less than 3 years also appears to be far beyond limits of safety
>> which are known to exist far sooner, and we cannot expect hardware and
>> networking layers to improve by those amounts in that time.
>
Aymeric Vitte via bitcoin-dev
2017-03-29 16:00:25 UTC
Le 29/03/2017 à 11:16, Jared Lee Richardson via bitcoin-dev a écrit :
> Nodes process transactions and are paid nothing to do so, and their
> costs are 100x more relevant to the blocksize debate than a paper
> about miner costs.
>
> Miners are rewarded with fees; nodes are rewarded only by utility and
> price increases.

Nodes are rewarded with nothing at all, which is the main problem of the
Bitcoin network (and the reason it is not a decentralized system today),
although it seems everybody is eluding the issue, along with the question
of how to set up full nodes quickly, which you raised in another answer
to this thread, and of course how to design a decentralized system that
ensures full nodes behave correctly.

Bitcoin would not be in this situation (that is, possibly at the mercy of
a very small minority of free riders among all the entities involved in
the network, namely miners seeking only to make more and more money
because they invested in an anti-ecological proof of work, not
understanding that Bitcoin is not just about money) if more nodes existed
and could reject their blocks.

The initial message of this thread reads like an ultimatum: either you
implement what we ask, or we join BU and then >50% is almost reached...



--
Zcash wallets made simple: https://github.com/Ayms/zcash-wallets
Bitcoin wallets made simple: https://github.com/Ayms/bitcoin-wallets
Get the torrent dynamic blocklist: http://peersm.com/getblocklist
Check the 10 M passwords list: http://peersm.com/findmyass
Anti-spies and private torrents, dynamic blocklist: http://torrent-live.org
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
Johnson Lau via bitcoin-dev
2017-03-28 17:34:23 UTC
You are probably not the first one nor last one with such idea. Actually, Luke wrote up a BIP with similar idea in mind:

https://github.com/luke-jr/bips/blob/bip-hfprep/bip-hfprep.mediawiki

Instead of just lifting the block size limit, he also suggested to remove many other rules. I think he has given up this idea because it’s just too complicated.

If we really want to prepare for a hardfork, we probably want to do more than simply increasing the size limit. For example, my spoonnet proposal:

https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013542.html

In a HF, we may want to relocate the witness commitment to a better place. We may also want to fix Satoshi's sighash bug. These are much more than a simple size increase.

So if we really want to prepare for a potential HF with unknown parameters, I'd suggest setting a time bomb in the client that stops processing transactions, with a big warning in the GUI. The user may still have an option to continue under the old rules at their own risk.

Or, instead of increasing the block size, we make a softfork to decrease the block size to 1kB and block reward to 0, activating far in the future. This is similar to the difficulty bomb in ETH, which will freeze the network.
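The time-bomb idea can be sketched as a startup check. This is a hedged sketch of the logic only, in Python; a real client would wire it into initialization and the GUI, and both the flag-day timestamp and the override option are placeholders:

```python
import time

FLAG_DAY = 1584000000  # placeholder Unix time for the scheduled fork

def may_process_transactions(now=None, override=False):
    """Return True if this (possibly stale) client may keep running.

    Before FLAG_DAY the client behaves normally. After FLAG_DAY it is
    presumed stale and refuses to continue under the old rules unless
    the operator passes an explicit override (a hypothetical flag).
    """
    now = time.time() if now is None else now
    if now < FLAG_DAY:
        return True
    if override:
        print("WARNING: continuing with pre-fork rules at your own risk")
        return True
    print("This software predates the scheduled fork; please upgrade.")
    return False

assert may_process_transactions(now=FLAG_DAY - 1)
assert not may_process_transactions(now=FLAG_DAY + 1)
assert may_process_transactions(now=FLAG_DAY + 1, override=True)
```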

Luke Dashjr via bitcoin-dev
2017-03-28 17:46:20 UTC
On Tuesday, March 28, 2017 5:34:23 PM Johnson Lau via bitcoin-dev wrote:
> You are probably not the first one nor last one with such idea. Actually,
> Luke wrote up a BIP with similar idea in mind:
>
> https://github.com/luke-jr/bips/blob/bip-hfprep/bip-hfprep.mediawiki
>
> Instead of just lifting the block size limit, he also suggested to remove
> many other rules. I think he has given up this idea because it’s just too
> complicated.
> ...
> So if we really want to get prepared for a potential HF with unknown
> parameters, I’d suggest to set a time bomb in the client, which will stop
> processing of transactions with big warning in GUI. The user may still
> have an option to continue with old rules at their own risks.

Indeed, actually implementing hfprep proved to be overly complicated.

I like the idea of a time bomb that simply shuts down the client after it
determines it is stale and refuses to start without an explicit override.
That should work no matter what the hardfork is, and gives us a good
expectation for hardfork timeframes.

> Or, instead of increasing the block size, we make a softfork to decrease
> the block size to 1kB and block reward to 0, activating far in the future.
> This is similar to the difficulty bomb in ETH, which will freeze the
> network.

I don't like this idea. It leaves the node open to attack from blocks
that actually meet the criteria. Maybe use the absolute minimum, as
Jeremy suggested.

Luke
Tom Zander via bitcoin-dev
2017-03-28 20:50:58 UTC
On Tuesday, 28 March 2017 19:34:23 CEST Johnson Lau via bitcoin-dev wrote:
> So if we really want to get prepared for a potential HF with unknown
> parameters,

That was not suggested.

Maybe you can comment on the very specific suggestion instead?

--
Tom Zander
Blog: https://zander.github.io
Vlog: https://vimeo.com/channels/tomscryptochannel
Johnson Lau via bitcoin-dev
2017-03-29 04:21:33 UTC
> On 29 Mar 2017, at 04:50, Tom Zander <***@freedommail.ch> wrote:
>
> On Tuesday, 28 March 2017 19:34:23 CEST Johnson Lau via bitcoin-dev wrote:
>> So if we really want to get prepared for a potential HF with unknown
>> parameters,
>
> That was not suggested.
>
> Maybe you can comment on the very specific suggestion instead?
>

Just take something like FlexTrans as an example: how could you get
prepared for that without first finalising the spec?

Or changing the block interval from 10 minutes to some other value?

Also, fixing the sighash bug for legacy scripts?

There are many other ideas that require a HF:
https://en.bitcoin.it/wiki/User:Gmaxwell/alt_ideas
Paul Iverson via bitcoin-dev
2017-03-28 19:56:49 UTC
Thank you for the proposal, Wang Chun!

It is clear that, spam aside, blocks are getting full and we need to
increase them soon. What I don't like about your proposal is that it
forces all node operators to implicitly accept larger blocks in 2020,
perhaps even against their will. 32 MB blocks might result in a loss of
decentralization, and it might be too difficult to coordinate for
smaller blocks before it's too late.


So I think Core can't decide on hard forks like this; it must be left up
to the users. I think the only choice is for Core to add a run-time
option that lets node operators increase the block size limit, so that
this very controversial decision does not come from Core. It must come
from the community.
Pieter Wuille via bitcoin-dev
2017-03-28 20:16:03 UTC
On Tue, Mar 28, 2017 at 12:56 PM, Paul Iverson via bitcoin-dev
<bitcoin-***@lists.linuxfoundation.org> wrote:
> So I think Core can't decide on hard forks like this. It must be left up to
> the users. I think only choice is for Core to add a run-time option to allow
> node operators to increase block size limit, so that this very controversial
> decision is not coming from Core. It must come from the community.

Neither Bitcoin Core's maintainers nor any other software's can decide
on a hard fork, and I keep being confused by the focus on Core in this
topic. Even if a hard forking change (or lack thereof) were included in
a new release, it is still up to the community to choose to run the new
software. Bitcoin Core very intentionally has no auto-update feature, as
the choice of which network rules to enforce must come from node
operators, not developers. Ask yourself this: if a new Bitcoin Core
release included a rule that blacklisted <random famous person>'s coins,
what do you think would happen? I hope that people would refuse to
update and would choose to run different full node software.

Core is not special. It is one of many pieces of software that
implement today's Bitcoin consensus rules. If a hardfork is to take
place in a way that does not result in two currencies, it must be
clear that the entire ecosystem will adopt it. Bitcoin Core will not
merge any consensus changes that do not clearly satisfy that
criterion.

--
Pieter
Tom Zander via bitcoin-dev
2017-03-28 20:43:34 UTC
On Tuesday, 28 March 2017 21:56:49 CEST Paul Iverson via bitcoin-dev wrote:
> It is clear that, spam aside, blocks are getting full and we need increase
> them soon. What I don't like about your proposal is it forces all node
> operators to implicitly accept larger blocks in 2020, even maybe against
> their will. 32 MB blocks might result in a loss of decentralization, and
> it might be too difficult to coordinate for small blocks before it's too
> late.

The suggestion was not to produce 32MB blocks, so your fear here is
unfounded.

--
Tom Zander
Blog: https://zander.github.io
Vlog: https://vimeo.com/channels/tomscryptochannel
Alphonse Pace via bitcoin-dev
2017-03-28 20:53:30 UTC
His demand (not suggestion) allows it without any safeguards.

>This patch must be in the immediate next release of Bitcoin Core.

That is not a suggestion.

Wang - still waiting on the details of this meeting. In the spirit of
openness, I think you ought to share with the community what kind of secret
meetings are happening.


Luke Dashjr via bitcoin-dev
2017-03-28 21:06:49 UTC
On Tuesday, March 28, 2017 8:53:30 PM Alphonse Pace via bitcoin-dev wrote:
> His demand (not suggestion) allows it without any safeguards.
>
> >This patch must be in the immediate next release of Bitcoin Core.
>
> That is not a suggestion.

I think it was probably a design requirement more than a demand. It makes
sense: if we're aiming to have a long lead time for a possible hardfork, we
want to get the lead time started ASAP. (It could perhaps have been
communicated clearer, but let's not read hostility into things when
unnecessary.)

Meta-topic: Can we try a little harder to avoid sequences of multiple brief
replies in a matter of minutes? Combine them to a single reply.

Luke
Tom Zander via bitcoin-dev
2017-03-28 20:48:44 UTC
On Tuesday, 28 March 2017 18:59:32 CEST Wang Chun via bitcoin-dev wrote:
> Despite spam tx on the network, the block capacity is approaching its
> limit, and we must think ahead. Shall we code a patch right now, to
> remove the block size limit of 1MB, but not activate it until far in
> the future. I would propose to remove the 1MB limit at the next block
> halving in spring 2020, only limit the block size to 32MiB which is
> the maximum size the current p2p protocol allows. This patch must be
> in the immediate next release of Bitcoin Core.
...
> We don't yet have an agreement on how to increase the block size
> limit. There have been many proposals over the past years, like
> BIP100, 101, 102, 103, 104, 105, 106, 107, 109, 148, 248, BU, and so
> on. These hard fork proposals, with this patch already in Core's
> release, they all become soft fork.

I think that is a very smart idea, thank you for making it.
--
Tom Zander
Blog: https://zander.github.io
Vlog: https://vimeo.com/channels/tomscryptochannel
Bram Cohen via bitcoin-dev
2017-03-29 06:32:20 UTC
On Tue, Mar 28, 2017 at 9:59 AM, Wang Chun via bitcoin-dev <
bitcoin-***@lists.linuxfoundation.org> wrote:

>
> The basic idea is, as many of us agree, hard fork is risky and should
> be well prepared. We need a long time to deploy it.
>

Much as it may be appealing to repeal the block size limit now, with a
grace period until a replacement is needed, in a repeal-and-replace
strategy, it is dubious to assume that an idea can be agreed upon later
when it cannot be agreed upon now. Putting a time limit on it runs into
the possibility that whatever reasons there were for not having general
agreement on a new setup before will still apply, and you wind up in the
embarrassing situation of sticking with the status quo after much Sturm
und Drang.
Jorge Timón via bitcoin-dev
2017-03-29 09:37:08 UTC
While Segwit's change from a 1MB size limit to a 4MB weight limit seems
to be controversial among some users (I find that very often this is
because they have been confused about what Segwit does, or even outright
lied to about it), I don't think it's very interesting to discuss
further size increases. I find it more interesting to talk to the users
and see how they think Segwit harms them; maybe we missed something in
Segwit that needs to be removed for it to become uncontroversial, or
maybe it is just disinformation.

On the other hand, we may want to have our first uncontroversial
hardfork asap, independently of block size. For example, we could do
something as simple as fixing the timewarp attack, as BIP99 proposes. I
cannot think of a hardfork that is easier to implement or has less
potential for controversy than that.
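The timewarp fix mentioned here can be illustrated with a toy version of the boundary rule. This is not Bitcoin's actual retarget code; it only sketches, with a placeholder tolerance, the kind of constraint that closes the period-boundary discontinuity the timewarp attack exploits:

```python
def boundary_timestamp_ok(prev_period_last_ts, new_period_first_ts,
                          tolerance=600):
    """Toy version of a proposed timewarp fix: the first block of a
    difficulty period may not claim a timestamp much earlier than the
    last block of the previous period. Without such a rule, the retarget
    calculation only compares timestamps within a period, so a miner can
    warp time at the boundary to distort the apparent timespan. The
    600-second tolerance is a placeholder, not a proposed value."""
    return new_period_first_ts >= prev_period_last_ts - tolerance

# An honest boundary: the new period starts shortly after the old one.
assert boundary_timestamp_ok(1_000_000, 1_000_300)
# A warped boundary: jumping far into the past is rejected.
assert not boundary_timestamp_ok(1_000_000, 900_000)
```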

Jared Lee Richardson via bitcoin-dev
2017-03-29 19:07:15 UTC
> While Segwit's change from 1 mb size limit to 4 mb weight limit seems to
> be controversial among some users [..] I don't think it's very interesting
> to discuss further size increases.

I think the reason for this is largely that SegWit, as a blocksize
increase, isn't very satisfying. It resolves to a one-time increase with
no future plan, thus engendering the same objections as the people who
demand we just "raise the number to N." People can argue about what N
should be, but when N is just a flat number, we know we'll have to deal
with the issue again.

In that light I think it is even more essential to continue to discuss the
blocksize debate and problem.

> I find more interesting to talk to the users and see how they think
> Segwit harms them,

From an inordinate amount of time spent reading Reddit, I believe this
largely comes down to the rumor that has a death grip on the BU
community: that Core are all just extensions of Blockstream, and that
Blockstream wants to restrict on-chain growth to force growth of their
second-layer services (Lightning and/or sidechains).

I believe the tone of the discussion needs to be changed, and I have been
trying to change that tone for weeks now. There's one faction that
believes Bitcoin will rarely, if ever, benefit from a blocksize
increase, and that rising fees are a desired/unavoidable result. There's a
different faction that believes Bitcoin's limits are arbitrary and that all
people worldwide should be able to put transactions of any size, even
microtransactions, on-chain. Both factions are extreme in their viewpoints
and resort to conspiracy theories to interpret the actions of
Core ("Blockstream did it") or BU ("Jihan controls everything, and anyone
who says otherwise is a shill paid by Roger Ver!").

It is all very unhealthy for Bitcoin. Both sides need to accept that
microtransactions from all humans cannot go on-chain, and that never
increasing the blocksize doesn't mean millions of home users will run
nodes. The node argument breaks down economically and the microtransaction
argument is an impossible mountain for a blockchain to climb.


On Wed, Mar 29, 2017 at 2:37 AM, Jorge Timón via bitcoin-dev <
bitcoin-***@lists.linuxfoundation.org> wrote:

> While Segwit's change from 1 mb size limit to 4 mb weight limit seems to
> be controversial among some users (I find that very often it is because
> they have been confused about what segwit does or even outright lied about
> it) I don't think it's very interesting to discuss further size increases.
> I find more interesting to talk to the users and see how they think Segwit
> harms them, maybe we missed something in segwit that needs to be removed
> for segwit to become uncontroversial, or maybe it is just disinformation.
>
> On the other hand, we may want to have our first uncontroversial hardfork
> asap, independently of block size. For example, we could do something as
> simple as fixing the timewarp attack as bip99 proposes. I cannot think of a
> hf that is easier to implement or has less potential for controversy than
> that.
>
Staf Verhaegen via bitcoin-dev
2017-04-02 19:02:02 UTC
Jared Lee Richardson via bitcoin-dev wrote on Wed 2017-03-29 at 12:07
[-0700]:

>
> It is all very unhealthy for Bitcoin. Both sides need to accept that
> microtransactions from all humans cannot go on-chain, and that never
> increasing the blocksize doesn't mean millions of home users will run
> nodes. The node argument breaks down economically and the
> microtransaction argument is an impossible mountain for a blockchain
> to climb.

What annoys me are people who seem to think that on-chain scaling has to
be severely limited in order to promote layer-two scaling. I am
convinced that for layer 2 to flourish, enough on-chain bandwidth has to
be available, not artificial scarcity.
To allow more on-chain bandwidth, sharding solutions should also be
investigated, so that not every transaction has to pass through every
node, using a protocol between nodes rather than payment channels.

greets,
Staf.
Martin Lízner via bitcoin-dev
2017-03-29 07:49:31 UTC
If there is to be a hard fork, the Core team should author the code; other
dev teams have only marginal support among BTC users.

I'm tending to believe that a HF is a necessary evil now. But let's take a
conservative approach:
- Fix historical BTC issues, improve the code
- Plan the HF activation date well ahead: 12+ months
- Allow the block size to increase on a year-by-year basis, as Luke suggested
- Compromise with miners on the initial block size bump (e.g. 2MB)
- SegWit

Martin Lizner

David Vorick via bitcoin-dev
2017-03-29 15:57:19 UTC
On Mar 29, 2017 9:50 AM, "Martin Lízner via bitcoin-dev" <
bitcoin-***@lists.linuxfoundation.org> wrote:

I'm tending to believe that a HF is a necessary evil now.


I will firmly disagree. We know how to do a soft-fork blocksize increase.
If it is decided that a block size increase is justified, we can do it with
extension blocks in a way that achieves full backwards compatibility for
all nodes.

Barring a significant security motivation, there is no need to hardfork.
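The soft-fork direction generalizes: a rule change that only tightens
validity is backwards-compatible by construction. A toy illustration of
this subset property, using the 32 MiB / 2 MB figures discussed earlier in
the thread (not consensus code, just the set-theoretic point):

```python
# A change is a soft fork when the set of blocks valid under the new rules
# is a subset of the old set, so un-upgraded nodes still accept the
# upgraded chain.
OLD_LIMIT = 32 * 1024 * 1024   # 32 MiB, the p2p message ceiling
NEW_LIMIT = 2 * 1000 * 1000    # a later 2 MB tightening

def valid_old(size: int) -> bool:
    return size <= OLD_LIMIT

def valid_new(size: int) -> bool:
    return size <= NEW_LIMIT

sample_sizes = range(0, OLD_LIMIT + 1, 250_000)

# Tightening: everything the new rule accepts, the old rule accepted too.
assert all(valid_old(s) for s in sample_sizes if valid_new(s))

# The converse fails, which is exactly why loosening a limit
# (e.g. 1 MB -> 32 MiB) requires a hard fork.
assert not all(valid_new(s) for s in sample_sizes if valid_old(s))
```

The same check explains the thread's earlier claim that, with a 32 MiB
limit already scheduled, any smaller limit chosen later deploys as a soft
fork.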

I am also solidly unconvinced that increasing the blocksize today is a good
move, even by as little as SegWit does. It's too expensive for a home user
to run a full node, and user-run full nodes are what provide the strongest
defence against political maneuvering.

When considering what block size is acceptable, the impact of running
bitcoin in the background on affordable, non-dedicated home hardware should
be a top consideration.

Disk space I believe is the most significant problem today, with RAM being
the second most significant problem, and finally bandwidth consumption as
the third most important consideration. I believe that v0.14 is already too
expensive on all three fronts, and that block size increases shouldn't be
considered at all until the requirements are reduced (or until consumer
hardware is better, but I believe we are talking 3-7 years of waiting if we
pick that option).
Aymeric Vitte via bitcoin-dev
2017-03-29 16:08:42 UTC
On 29/03/2017 at 17:57, David Vorick via bitcoin-dev wrote:
> It's too expensive for a home user to run a full node, and user-run
> full nodes are what provide the strongest defence against political
> manuveuring.

Yes, but what makes you think that "it's too expensive for a home user to
run a full node"? Not trivial, maybe; long to set up, for sure; but why
"expensive"? I tested running a reasonably well-configured full node from
home and did not notice anything annoying or expensive.

--
Zcash wallets made simple: https://github.com/Ayms/zcash-wallets
Bitcoin wallets made simple: https://github.com/Ayms/bitcoin-wallets
Get the torrent dynamic blocklist: http://peersm.com/getblocklist
Check the 10 M passwords list: http://peersm.com/findmyass
Anti-spies and private torrents, dynamic blocklist: http://torrent-live.org
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
David Vorick via bitcoin-dev
2017-03-29 16:18:26 UTC
Perhaps you are fortunate to have a home computer that has more than a
single 512GB SSD. Lots of consumer hardware has that little storage. Throw
on top of it standard consumer usage, and you're often left with less than
200 GB of free space. Bitcoin consumes more than half of that, which feels
very expensive, especially if it motivates you to buy another drive.

I have talked to several people who cite this as the primary reason that
they are reluctant to join the full node club.
David Vorick via bitcoin-dev
2017-03-29 16:25:47 UTC
On Mar 29, 2017 12:20 PM, "Andrew Johnson" <***@gmail.com>
wrote:

What's stopping these users from running a pruned node? Not every node
needs to store a complete copy of the blockchain.


Pruned nodes are not the default configuration, if it was the default
configuration then I think you would see far more users running a pruned
node.
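As a side note, pruning today is a one-line opt-in rather than the default.
In Bitcoin Core it is enabled in bitcoin.conf (550 MiB is the minimum value
the option accepts):

```ini
# bitcoin.conf: discard raw block/undo data beyond ~550 MiB once validated.
# The node still fully validates everything; it just cannot serve old
# blocks to peers that are syncing from scratch.
prune=550
```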

But that would also substantially increase the burden on archive nodes.


Further discussion about disk space requirements should be taken to another
thread.
Andrew Johnson via bitcoin-dev
2017-03-29 16:41:29 UTC
I believe that as we continue to add users to the system by scaling
capacity, we will see more new nodes appear, but I'm at a bit of a loss
as to how to prove it empirically.

I do see your point on increasing load on archival nodes, but the majority
of that load is going to come from new nodes coming online, they're the
only ones going after very old blocks. I could see that as a potential
attack vector, overwhelm the archival nodes by spinning up new nodes
constantly, therefore making it difficult for a "real" new node to get up
to speed in a reasonable amount of time.

Perhaps the answer there would be a way to pay an archival node a small
amount of bitcoin in order to retrieve blocks older than a certain cutoff?
Include an IP address for the requesting node as metadata in the
transaction... Archival nodes could set and publish their own policy, and
the market would decide what those older blocks are worth. It would also
help to incentivize running archival nodes, which we do need. Of course,
this isn't very user friendly.

We can take this to bitcoin-discuss, if we're getting too far off topic.


Aymeric Vitte via bitcoin-dev
2017-03-29 17:14:50 UTC
Well, it's not going off-topic, since the BTC folks now need to find a way
to counter the attack.

The disk space story is known to be a non-issue. Encouraging people to run
nodes when they don't know how to dedicate the right storage space, which
is trivial and inexpensive to get today, is just misguided; such people
should not try to run full nodes. And no, I tested with non-SSD drives. I
was more curious about CPU and bandwidth use, but did not notice any
impact. I only stopped because a repeated software bug or drive issue
desynced the chain, and bitcoin-qt was trying to reload it from the
beginning each time, which in my case took 10 days despite good bandwidth
(which would allow me to torrent the entire chain plus state in less than
20 hours). So I stopped after the third crash; setting up a full node on my
servers is still on the todo list (very low priority, for the reasons
already explained).

Running a pruned node implies first setting up a full node, so the same
problems apply, and then the advantage of pruning is not really obvious. I
don't know what the strange story about "archival nodes" is; I proposed
something else.

Back to the topic: the conclusion is that it is not difficult at all for
many people to run efficient full nodes. Ideally the community should
promote this, seed a torrent with a recent state, implement a patch to
defeat BU's plans, and have everybody upgrade.

But of course this will not happen.


Jared Lee Richardson via bitcoin-dev
2017-03-29 20:53:40 UTC
> Pruned nodes are not the default configuration, if it was the default
configuration then I think you would see far more users running a pruned
node.

Default configurations aren't a big enough deal to factor into the critical
discussion of node costs versus transaction fee costs. Default
configurations can be changed, and if nodes are negatively affected by a
default configuration, there will be an abundance of information about how
to correct that by turning on pruning. Bitcoin can't be designed on the
assumption that people can't google - if we wanted to cater to that
population right now, we'd need 100x the blocksize at least.

> But that would also substantially increase the burden on archive nodes.

This is already a big problem, judging from the measurements I've been
looking at. There are alternatives that need to be considered here as
well. If we limit ourselves to not changing the syncing process for most
users, the blocksize limit debate changes drastically. Hard drive costs,
CPU costs, propagation times... none of those things matter, because the
cost of sync bandwidth is so incredibly high even now ($130ish per month,
see other email). Even if we didn't increase the blocksize any more than
segwit does, we're already seeing sync costs being shifted onto fewer
nodes - e.g., Luke-Jr's scan found ~50k nodes online, but only 7k of those
show up on sites like bitnodes.21.co. Segwit will shift this further,
until the few nodes providing sync hit their bandwidth limits and/or max
out on connections, leaving no fully-synced nodes for a new node to
connect to. Then wallet providers / node software will offer a solution -
a bundled UTXO checkpoint that removes the need to sync. This slightly
increases centralization, and would increase it more if Core were to
adopt the same approach.
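As a back-of-the-envelope check on that kind of claim, the arithmetic is
simple (all three figures below are assumptions chosen for illustration,
not the measurement cited above):

```python
# Rough monthly egress bill for a node that serves initial block download.
chain_gb = 110          # approximate chain size in early 2017 (assumption)
syncs_per_month = 30    # full syncs served per month (assumption)
usd_per_gb = 0.04       # hypothetical bandwidth price per GB

monthly_cost = chain_gb * syncs_per_month * usd_per_gb
print(f"~${monthly_cost:.0f}/month to serve one full sync per day")
# -> ~$132/month to serve one full sync per day
```

The point stands regardless of the exact inputs: the bill scales linearly
with chain size, so a UTXO-checkpoint scheme that avoids re-serving history
cuts it by roughly the ratio of state size to chain size.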

The advantage would be tremendous for such a simple solution - Node costs
would drop by a full order of magnitude for full nodes even today, more
when archival nodes are more restricted, history is bigger, and segwit
blocksizes are in effect, and then blocksizes could be safely increased by
nearly the same order of magnitude, increasing the utility of bitcoin and
the number of people that can effectively use it.

Another, much more complicated option is for the node sync process to
function like a Tor network. A very small number of seed nodes could send
data on only to the other nodes with the highest bandwidth available (and
a good retention policy, i.e. not tightly pruning as they sync), who then
spread it out further, and so on. That's complicated though, because as
far as I know the syncing process today has no ability to exchange a
selfish syncing node for a high-performing one. I'm not even sure - will a
syncing node opt to sync from a different node that, itself, isn't fully
synced but is farther ahead?

At any rate, syncing bandwidth usage is a critical problem for future
growth and is solvable. The upsides of fixing it are huge, though.

Jared Lee Richardson via bitcoin-dev
2017-03-29 20:32:05 UTC
> Perhaps you are fortunate to have a home computer that has more than a
single 512GB SSD. Lots of consumer hardware has that little storage.

That's very poor logic, sorry. Restricted-space SSDs are not a
cost-effective hardware option for running a node. Keeping blocksizes
small has significant other costs for everyone. Comparing the cost of
running a node under arbitrary conditions A, B, or C, when there are far
more efficient options than any of those, is a very bad way to think about
the costs of running a node. You basically have to ignore the significant
consequences of keeping blocks small.

If node operational costs rose to the point where an entire wide swath of
users that we do actually need for security purposes could not justify
running a node, that's something important for consideration. For me, that
translates to modern hardware that's relatively well aligned with the needs
of running a node - perhaps budget hardware, but still modern - and
above-average bandwidth caps.

You're free to disagree, but your example only makes sense to me if
blocksize caps didn't have serious consequences. Even if those
consequences are just the threat of a contentious fork by people who are
misled about the real consequences, that threat is still a consequence
itself.

praxeology_guy via bitcoin-dev
2017-03-29 21:36:17 UTC
Peter R said: "On that topic, are there any existing proposals detailing a canonical ordering of the UTXO set and a scheme to calculate the root hash?"

I created such here: "A Commitment-suitable UTXO set "Balances" file data structure": https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-March/013692.html

In short, it periodically makes snapshots of the state of the UTXO set as of N blocks ago, where N is the snapshot period. UTXOs are ordered by TXID. I've also implemented it in C and tested making snapshots.
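For readers unfamiliar with the idea, here is a minimal sketch of
committing to a canonically ordered UTXO snapshot. This is an illustration
only, not the linked proposal's format: the leaf serialization (txid,
little-endian vout, amount, script) and the Bitcoin-style odd-leaf
duplication are assumptions made for the example.

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Double SHA-256, as used elsewhere in Bitcoin."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def utxo_set_root(utxos) -> bytes:
    """Merkle root over a UTXO snapshot in canonical (txid, vout) order.

    utxos: iterable of (txid: bytes, vout: int, amount: int, script: bytes).
    """
    leaves = [
        sha256d(txid
                + vout.to_bytes(4, "little")
                + amount.to_bytes(8, "little")
                + script)
        for txid, vout, amount, script in sorted(utxos)
    ]
    if not leaves:
        return sha256d(b"")
    while len(leaves) > 1:
        if len(leaves) % 2:              # odd count: duplicate the last
            leaves.append(leaves[-1])    # leaf, as Bitcoin's tx tree does
        leaves = [sha256d(leaves[i] + leaves[i + 1])
                  for i in range(0, len(leaves), 2)]
    return leaves[0]
```

The key property for a consensus commitment is that the root depends only
on the set's contents, not on insertion order, which the canonical sort
provides.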

gmaxwell says the UTXO data format will change, and I have other recommended changes to the chainstate database to make this more efficient. He pointed me to another, similar solution and suggested this would be done later, after SegWit and after the UTXO data format in the chainstate database is changed.

Cheers,
Praxeology Guy
Aymeric Vitte via bitcoin-dev
2017-03-29 22:33:20 UTC
I have heard such a theory before; it's a complete mistake to think that
others would run full nodes to protect their business, and thereby yours,
unless it is proven that they are decentralized and independent.

Running a full node is trivial and not expensive for people who know how
to do it, even with much bigger blocks, assuming that the full nodes are
still decentralized and that they don't have to fight against big nodes
who would attract the traffic first.

I have posted a small proposal here many times that exactly describes
what is going on now; yes, miners are nodes too. It's disturbing to see
that, despite terabytes of BIPs, papers, etc., the current situation is
happening, and that the supposedly decentralized system is biased by
centralization.

Do we know what majority controls the 6000 full nodes?


Ryan J Martin via bitcoin-dev
2017-03-30 05:23:31 UTC
There is a lot going on in this thread, so I'll reply more broadly.

The original post and the assorted limit proposals lead me to something I think is worth reiterating: assuming Bitcoin adoption continues to grow at similar or accelerating rates, eventually the mempool is going to be filled with thousands of transactions at all times, whether the block limit is 1MB or 16MB. This isn't to say that increasing the limit isn't a worthwhile change, but rather that if we are going to change the block limit, it should be done with the intent of achieving a fee rate that maximizes surplus (and minimizes burden) for both users and miners. Even with a payment channel system implemented, the pool will likely be faced with a mountain of txs. Thus the block limit should be chosen such that social welfare is maximized.
Keeping the limit at 1MB is likely not that optimum; it maximizes benefit to miners (producers) while minimizing users' (consumers') surplus. 'Unlimited' blocks are purely the reverse: maximizing user surplus while minimizing miners' (with the added bonus of creating blocks that will put technical/hardware strain on the network). So perhaps pursue something in between that actually optimizes based on a social welfare formula, not just an arbitrary auto-adjusting limit like the other proposals I've seen. Feel free to poke holes in this or e-mail me if curious.
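To make the social-welfare framing concrete, here is a deliberately toy
model (every number in it is hypothetical): with a linear fee-demand curve
and a per-transaction validation burden on node operators, total surplus
peaks at an interior capacity, neither at a hard floor nor at unlimited
size.

```python
def total_welfare(q: float, max_fee: float = 100.0,
                  demand: float = 10_000, node_cost: float = 20.0) -> float:
    """Toy welfare: gross user value (area under a linear inverse-demand
    curve, fee(x) = max_fee * (1 - x/demand), up to q transactions)
    minus a per-tx validation burden borne by the network."""
    q = min(q, demand)
    gross_user_value = max_fee * (q - q * q / (2 * demand))
    return gross_user_value - node_cost * q

# Scan candidate capacities: with these numbers the optimum is interior
# (8,000 tx), not 0 ("never raise the limit") and not full demand
# ("unlimited blocks").
best = max(range(0, 10_001, 100), key=total_welfare)
```

The shape, not the numbers, is the point: any positive per-transaction
burden pulls the optimum below "serve all demand", and any positive user
demand pulls it above "never grow".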

Finally, with respect to getting node counts up, didn't luke-jr or someone come up with an idea of paying nodes a reward by scraping dust and pooling it into a fund of sorts? Was this not possible/feasible? Perhaps, at least in the near and medium term, something outside of protocol changes could be done to pay a reward to nodes. Even if this is done via a voluntary donation system, it may be useful for the purpose of seeing how people respond to incentives and working out an elasticity measure of sorts for running a node.


Ryan J. Martin
***@millersville.edu
(on freenode: tunafizz )

________________________________
From: bitcoin-dev-***@lists.linuxfoundation.org [bitcoin-dev-***@lists.linuxfoundation.org] on behalf of Aymeric Vitte via bitcoin-dev [bitcoin-***@lists.linuxfoundation.org]
Sent: Wednesday, March 29, 2017 6:33 PM
To: Jared Lee Richardson; Bitcoin Protocol Discussion
Subject: Re: [bitcoin-dev] Hard fork proposal from last week's meeting


I have heard such theory before, it's a complete mistake to think that others would run full nodes to protect their business and then yours, unless it is proven that they are decentralized and independent

Running a full node is trivial and not expensive for people who know how to do it, even with much bigger blocks, assuming that the full nodes are still decentralized and that they don't have to fight against big nodes who would attract the traffic first

I have posted many times here a small proposal, that exactly describes what is going on now, yes miners are nodes too... it's disturbing to see that despite of Tera bytes of BIPs, papers, etc the current situation is happening and that all the supposed decentralized system is biased by centralization

Do we know what majority controls the 6000 full nodes?

Le 29/03/2017 à 22:32, Jared Lee Richardson via bitcoin-dev a écrit :
> Perhaps you are fortunate to have a home computer that has more than a single 512GB SSD. Lots of consumer hardware has that little storage.

That's very poor logic, sorry. Restricted-space SSD's are not a cost-effective hardware option for running a node. Keeping blocksizes small has significant other costs for everyone. Comparing the cost of running a node under arbitrary conditons A, B, or C when there are far more efficient options than any of those is a very bad way to think about the costs of running a node. You basically have to ignore the significant consequences of keeping blocks small.

If node operational costs rose to the point where an entire wide swath of users that we do actually need for security purposes could not justify running a node, that's something important for consideration. For me, that translates to modern hardware that's relatively well aligned with the needs of running a node - perhaps budget hardware, but still modern - and above-average bandwidth caps.

You're free to disagree, but your example only makes sense to me if blocksize caps didn't have serious consequences. Even if those consequences are just the threat of a contentious fork by people who are mislead about the real consequences, that threat is still a consequence itself.

On Wed, Mar 29, 2017 at 9:18 AM, David Vorick via bitcoin-dev <bitcoin-***@lists.linuxfoundation.org<mailto:bitcoin-***@lists.linuxfoundation.org>> wrote:
Perhaps you are fortunate to have a home computer that has more than a single 512GB SSD. Lots of consumer hardware has that little storage. Throw on top of it standard consumer usage, and you're often left with less than 200 GB of free space. Bitcoin consumes more than half of that, which feels very expensive, especially if it motivates you to buy another drive.

I have talked to several people who cite this as the primary reason that they are reluctant to join the full node club.

_______________________________________________
bitcoin-dev mailing list
bitcoin-***@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev








--
Zcash wallets made simple: https://github.com/Ayms/zcash-wallets
Bitcoin wallets made simple: https://github.com/Ayms/bitcoin-wallets
Get the torrent dynamic blocklist: http://peersm.com/getblocklist
Check the 10 M passwords list: http://peersm.com/findmyass
Anti-spies and private torrents, dynamic blocklist: http://torrent-live.org
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
Tom Zander via bitcoin-dev
2017-03-30 10:30:49 UTC
On Thursday, 30 March 2017 07:23:31 CEST Ryan J Martin via bitcoin-dev
wrote:
> The original post and the assorted limit proposals---lead me to
> something I think is worth reiterating: assuming Bitcoin adoption
> continues to grow at similar or accelerating rates, then eventually the
> mempool is going to be filled with thousands of txs at all times whether
> block limits are 1MB or 16MB

This is hopefully true. :)

There is an unbounded amount of demand for block space, so it doesn’t
benefit anyone if the number of free transactions gets out of hand;
freeloaders would definitely be able to completely suffocate Bitcoin.

In his mail, the OP makes clear that this is a proposal for a hard fork
to change the block size *limit*. The actual block size would not be
changed at the same time; it would continue being set based on market
values or whatever we decide between now and then.

The block size itself should be set based on the amount of fees being paid
to miners to make a block.

What we want is a true fee market where the miner can decide to make a
block smaller to get people to pay more fees, because if we were to go to
16MB blocks in one go, the miner’s cost would go up but his fee-based
reward would go down!
A block so big that 100% of the transactions will always be mined in the
next block will just cause a large section of people to no longer feel the
need to pay fees.

As such I don’t fear the situation where the block size limit goes up a lot
in one go, because it is not in anyone’s interest to make the actual block
size follow.
--
Tom Zander
Blog: https://zander.github.io
Vlog: https://vimeo.com/channels/tomscryptochannel
Jared Lee Richardson via bitcoin-dev
2017-03-30 16:44:21 UTC
> The block size itself should be set based on the amount of fees being
> paid to miners to make a block.

There's a formula to this as well, though going from it to a blocksize
number will be very difficult. Miner fees need to be sufficient to
maintain economic protection against attackers, and there is no reason
that miner fees need to be any higher than "sufficient." I believe the
"sufficient" value can be estimated by considering a potential attacker
seeking to profit from short-selling Bitcoin after causing a panic crash.
If they can earn more profit from shorting Bitcoin than it costs to buy,
build/deploy, and perform a 51% attack to shut down the network, then we
are clearly vulnerable. The profit side of the equation can be worked out
as:

(bitcoin_price * num_coins_shortable * panic_price_drop_percentage)

The equation for the cost side of the equation depends on the total amount
of miner hardware that the network is sustainably paying to operate,
factoring in all costs of the entire bitcoin mining lifecycle (HW cost,
deployment cost, maintenance cost, electricity, amortized facilities cost,
business overheads, orphan losses, etc) except chip design, which the
attacker may be able to take advantage of for free. For convenience I'm
simplifying that complicated cost down to a single number I'm calling
"hardware_lifespan" although the concept is slightly more involved than
that.

(total_miner_payouts * bitcoin_price * hardware_lifespan)

Bitcoin_price is on both sides of the equation and so can be divided out,
giving:

Unsafe when: (num_coins_shortable * panic_price_drop_percentage) >
(total_miner_payouts * hardware_lifespan)

Estimating the total number of coins an attacker of nearly unlimited
funds could short is tricky, especially when things like high leverage
levels or naked short selling may be offered by exchanges. The percentage
drop the resulting panic would cause is also tricky to estimate, but for
both numbers we can make some rough guesses and see how they play out.
With more conservative numbers (say, a 2-year hardware lifespan, 10%
shortable, and a 70% panic drop) you get roughly 1,300k coins of profit,
meaning a minimum of 1800 BTC/day in fees is needed to make the attack
cost more than it profits.
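The arithmetic above can be sketched directly. This is a minimal sketch using illustrative inputs in the spirit of the post (circulating supply, shortable fraction, panic drop, and hardware lifespan are all assumptions, not measured values):

```python
# Break-even sketch for the short-selling 51% attack described above.
# Every input here is an illustrative assumption, not data.

def min_daily_fees_btc(circulating_coins, short_fraction, panic_drop,
                       hardware_lifespan_days):
    """Daily miner revenue (in BTC) above which the attack is unprofitable.

    Attacker profit ~ coins_shorted * panic_drop            (BTC terms)
    Attacker cost   ~ daily_miner_payout * hardware_lifespan
    Setting cost equal to profit and solving for the daily payout.
    """
    coins_shorted = circulating_coins * short_fraction
    attacker_profit = coins_shorted * panic_drop
    return attacker_profit / hardware_lifespan_days

# ~16M coins in circulation (early 2017), 10% shortable, 70% panic drop,
# 2-year hardware lifespan: roughly the "conservative" inputs above.
fees = min_daily_fees_btc(16_000_000, 0.10, 0.70, 2 * 365)
print(f"minimum safe fee level: {fees:.0f} BTC/day")

# Spread over ~288k transactions/day (about 2000 tx per 1MB block,
# 144 blocks/day), that implies a per-transaction fee floor:
print(f"per-tx floor: {fees / (2000 * 144):.4f} BTC")
```

With these inputs the floor lands inside the 500 to 2000 BTC/day band cited below; the exact number moves with every assumption, which is the point of keeping the formula explicit.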

Using various inputs and erring on the side of caution, I get a minimum
BTC/day fee range of 500-2000. Unfortunately if the blocksize isn't
increased, a relatively small number of transactions/users have to bear the
full cost of the minimum fees, over time increasing the minimum "safe"
average fee paid to 0.008 BTC, 30x the fees people are complaining about
today, and increasing in real-world terms as price increases. All that
said, I believe the costs for node operation are the number that gets hit
first as blocksizes are increased, at least past 2020. I don't think
blocksizes could be increased to such a size that the insufficient-fee
vulnerability would be a bigger concern than high node operational costs.
The main thing I don't have a good grasp on at the moment is any math to
estimate how many nodes we need to protect against the attacks that can
come from having few nodes, or even a clear understanding of what those
attacks are.

> A block so big that 100% of the transactions will always be mined in the
> next block will just cause a large section of people to no longer feel the
> need to pay fees.

This is also totally true. A system that tried to eliminate the fee
markets would be flawed, and fortunately miners have significant reasons to
oppose such a system.

The reverse is also a problem: if miners as a large group sought to lower
blocksizes to force fee markets higher, that could be a problem. I don't
have solutions for the issue at this time, but it is something I've turned
over in my mind.

Jared Lee Richardson via bitcoin-dev
2017-03-30 20:51:45 UTC
> What we want is a true fee-market where the miner can decide to make a
> block smaller to get people to pay more fees, because if we were to go
> to 16MB blocks in one go, the cost of the miner would go up, but his
> reward based on fees will go down!

I agree in concept with everything you've said here, but I think there's a
frequent misconception that there's a certain level of miner payouts that
miners "deserve", and/or the opposite, that miners "deserve" as little as
possible. The 51% attacks that PoW shields us from are relatively well
defined, which can be used to estimate the minimum amount of sustainable
fees for that shielding. Beyond that minimum, the best fee level for every
non-miner is the lowest one.

Unfortunately miners could arbitrarily decide to limit blocksizes, and
there's little except relay restrictions that everyone else could do about
it. Fortunately miners have so far pushed for blocksize increases at least
as much as anyone else, though what happens when Bitcoin adoption
stabilizes is an unknown.

> A block so big that 100% of the transactions will always be mined in the
> next block will just cause a large section of people to no longer feel the
> need to pay fees.

FYI, I don't see this happening again ever, barring brief exceptions,
unless there was a sudden blocksize change, which ideally we'd avoid ever
happening. The stable average value of the transaction fee determines what
kind of business use-cases can be built using Bitcoin. An average fee of
$0.001 usd enables a lot more use cases than $0.10 average fees, and $50.00
average fees still have far more possible use cases than a $1000 average
fee. If fees stabilize low, use cases will spring up to fill the
blockspace, unless miners arbitrarily seek to keep the fees above some level.

Tom Zander via bitcoin-dev
2017-03-30 21:57:59 UTC
On Thursday, 30 March 2017 22:51:45 CEST Jared Lee Richardson wrote:
> Unfortunately miners could arbitrarily decide to limit blocksizes, and
> there's little except relay restrictions that everyone else could do about
> it.

No, there is a lot you and I can do about it. They call it a fee market
for a reason: you can take your money elsewhere. You can choose to not
make the transfer at all, use another crypto, or just use fiat.

Bitcoin has value because we use it as money; suppress that use case and
its value goes down.
--
Tom Zander
Blog: https://zander.github.io
Vlog: https://vimeo.com/channels/tomscryptochannel
Aymeric Vitte via bitcoin-dev
2017-03-30 10:13:21 UTC
Apparently we will not reach an understanding, and we will probably be
told soon that this is going off topic, so a short answer:

Eh, no. Maybe you would like to quote Mozilla or the W3C too; all of
those organizations are financed by the big companies and promote their
interests (specs, DRM, etc.), so would you really trust them?

A full node does not have to validate all txs and blocks. I am not aware
of any P2P system organized with peers and intermediate nodes (with no
incentive) that survived (Diaspora, for example), and the most famous one
(which, btw, is handling much more traffic than what you describe) is
doing well because there is an intrinsic incentive for the users; see my
comment here:
https://ec.europa.eu/futurium/en/content/final-report-next-generation-internet-consultation.
It is surprising that nobody raised those issues during the consultation.

Paradoxically, cryptocurrencies now make it possible to reward/sustain
other systems, so they should probably first concentrate on how to
reward/sustain themselves; different ideas have surfaced for rewarding the
full nodes but still seem very far from materializing.

Coming back again to the subject: does anyone have any idea who is behind
the existing full nodes, and how to rank them according to their
participation in the network? Up to now there has been almost no
discussion about the plans for the full nodes, which tends to suggest that
this is considered obvious.


Le 30/03/2017 à 03:14, Jared Lee Richardson a écrit :
> > I have heard such theory before, it's a complete mistake to think
> that others would run full nodes to protect their business and then yours,
>
> It is a complete mistake to think that others would create a massive
> website to share huge volumes of information without any charges or
> even any advertising revenue.
>
> https://en.wikipedia.org/wiki/List_of_most_popular_websites
>
> Wikipedia, 5th largest website. Well, I guess there's some exceptions
> to the complete mistake, eh?
>
> Relying on other nodes to provide verification for certain types of
> transactions is completely acceptable. If I'm paying a friend $100,
> or paying my landlord $500, that's almost certainly totally fine.
> There's nothing that says SPV nodes can't source verifications from
> multiple places to prevent one source from being compromised. There's
> also some proposed ideas for fraud proofs that could be added, though
> I'm not familiar with how they work. If verification was a highly in
> demand service, but full nodes were expensive, companies would spring
> up that offered to verify transactions for a miniscule fee per month.
> They couldn't profit from 100 customers, but they could profit from
> 10,000 customers, and their reputation and business would rely on
> trustworthy verification services.
>
> I certainly wouldn't suggest any of those things for things like
> million dollar purchase, or a purchase where you don't know the seller
> and have no recourse if something goes wrong, or a purchase where
> failure to complete has life-altering consequences. Those
> transactions are the vast minority of transactions, but they need the
> additional security of full-node verification. Why is it unreasonable
> to ask them to pay for it, but not also ask other people who really
> don't need that security to pay for it? If a competing blockchain
> successfully offers both high security and low-fee users exactly what
> that particular user needs, they have a major advantage against one
> that only caters to one group or the other.
>
> > Running a full node is trivial and not expensive for people who know
> how to do it, even with much bigger blocks,
>
> This logic does not hold against the scale of the numbers. Worldwide
> 2015 transaction volume was 426 billion and is growing by almost 10%
> per year. In Bitcoin terms, that's 4.5 GB blocks, and approximately
> $30,000 in bandwidth a month just to run a pruning node. And there's
> almost no limit to the growth - 426 billion transactions is despite
> the fact that the majority of humans on earth are unbanked and did not
> add a single transaction to that number.
>
> I don't believe the argument that Bitcoin can serve all humans on
> earth is any more valid than the argument that any computer hardware
> should be able to run a node. Low node operational costs mean a
> proportional penalty to Bitcoin's usability, adoption, and price. Low
> transaction fee costs mean a proportional high node operational cost,
> and therefore possibly represent node vulnerabilities or verification
> insecurities.
>
> There's a balancing point in the middle somewhere that achieves the
> highest possible Bitcoin usability without putting the network at
> risk, and providing layers of security only for the transactions that
> truly need it and can justify the cost of such security.
>
>
>
Jared Lee Richardson via bitcoin-dev
2017-03-29 19:46:50 UTC
> When considering what block size is acceptable, the impact of running
> bitcoin in the background on affordable, non-dedicated home-hardware
> should be a top consideration.

Why is that a given? Is there math that outlines what the risk levels are
for various configurations of node distributions, vulnerabilities, etc.?
How does one even weigh the costs and benefits of node operation against
transaction fees?

> Disk space I believe is the most significant problem today, with RAM
> being the second most significant problem, and finally bandwidth
> consumption as the third most important consideration. I believe that
> v0.14 is already too expensive on all three fronts, and that block size
> increases shouldn't be considered at all until the requirements are
> reduced (or until consumer hardware is better, but I believe we are
> talking 3-7 years of waiting if we pick that option).

Disk space is not the largest cost, either today or in the future. Without
historical checkpointing in some fashion, bandwidth costs are more than
two orders of magnitude higher than every other cost for full listening
nodes. With historical syncing discounted (i.e. pruned or non-listening
nodes), bandwidth costs are still higher than hard drive costs.


Today: a full listening node with 133 peers measured 1.5 TB/mo of
bandwidth consumption over two multi-day intervals. 1,500 GB/month @ EC2
low-tier prices = $135/month; 110 GB storage = $4.95. Similar arguments
extend to consumer hardware: Comcast broadband is ~$80/mo depending on
region and comes with a 1.0 TB cap in most regions, so $120/mo or even
$80/mo would be in the same ballpark. A consumer-grade 2TB hard drive is
$70 and will last for at least 2 years, so $2.93/month if the hard drive
were totally dedicated to Bitcoin and $0.16/month if we only count the
percentage that Bitcoin uses.

For a non-full listening node with ~25 peers, I measured around 70
GB/month of usage over several days, which is $6.30 per month on EC2 or
$5.60 in proportional Comcast cost. If someone isn't supporting syncing,
there's not much point in them not turning on pruning. Even if they
didn't, a desktop in the $500 range typically comes with 1 or 2 TB of
storage by default, and without segwit or a blocksize cap increase, 3
years from now the full history will only take up 33% of the smaller,
three-year-old, budget-range PC hard drive. Even then, if we assume the
hard drive price declines of the last 4 years hold steady (14%, very low
compared to historical gains), 330 GB of data only works out to a
proportional monthly cost of $6.20, still slightly smaller than the
bandwidth costs, and almost entirely removable by turning on pruning
since such a user isn't paying to help others sync.
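This back-of-the-envelope comparison can be written down explicitly. A minimal sketch, where the per-GB egress rate, drive price, and 2-year amortization window are the 2017-era assumptions used above:

```python
# Monthly node-cost comparison using the figures from the paragraphs above.
# All prices are the post's 2017-era assumptions, not current quotes.

def monthly_bandwidth_cost(gb_per_month, usd_per_gb=0.09):
    # Assumes roughly $0.09/GB cloud egress, consistent with the
    # "$135 per 1,500 GB" figure cited above.
    return gb_per_month * usd_per_gb

def monthly_storage_cost(drive_usd, drive_gb, used_gb, lifespan_months=24):
    # Amortize a consumer drive over ~2 years and charge Bitcoin only for
    # the fraction of the drive it actually occupies.
    return (drive_usd / lifespan_months) * (used_gb / drive_gb)

full_listening = monthly_bandwidth_cost(1500)          # ~$135/month
pruned_like    = monthly_bandwidth_cost(70)            # ~$6.30/month
storage_share  = monthly_storage_cost(70, 2000, 110)   # ~$0.16/month

print(full_listening, pruned_like, round(storage_share, 2))
```

Bandwidth dominates the proportional storage share by roughly three orders of magnitude for a full listening node, which is the crux of the argument above.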

I don't know how to evaluate the impacts of RAM or CPU usage, or
consequently electricity usage, for a node yet. I'm open to quantifying
any of those if there's a method, but it seems absurd that RAM could even
become a significant factor given the abundance of cheap RAM nowadays,
with few programs needing it. CPU usage, and thus electricity cost, might
become a factor; I just don't know how to quantify it at various block
scales. Currently CPU usage isn't taxing any hardware that I run a node
on in any way I have been able to notice, not including the syncing
process.

> I am also solidly unconvinced that increasing the blocksize today is a
> good move, even as little as SegWit does.

The consequence of your logic of holding node operational costs down is
that transaction fees for users go up, adoption slows as various use cases
become impractical, price growth suffers, and altcoins that choose lower
fees over node-cost concerns will exhibit competitive growth against
Bitcoin's cryptocurrency market share. Even if you are right, that's a
tradeoff worth thoroughly investigating from every angle; the consequences
could be just as dire for Bitcoin in 10 years as they would be if we made
ourselves vulnerable.

And even if an altcoin can't take Bitcoin's dominance through lower fees,
we will not end up with millions of home users running nodes, ever. If we
did, that would imply orders of magnitude more fee-market competition and
continuing price increases while hardware costs decline. If transaction
fees go up from space limitations, and go up even further in real-world
terms from price increases, while node costs decline, eventually it will
cost more to send a transaction than it does to run a node for a full
month. No home users would send transactions, because the fee costs would
be higher than anything they might use Bitcoin for, and so they would not
run a node for something they don't use. Why would they? The cost of
letting the ratio between node costs and transaction costs swing to the
extreme favor of node costs would be worse: lower Bitcoin usability,
adoption, and price, without any meaningful increase in security.

How do we evaluate the math on node distributions versus various attack
vectors?



On Wed, Mar 29, 2017 at 8:57 AM, David Vorick via bitcoin-dev <
bitcoin-***@lists.linuxfoundation.org> wrote:

>
> On Mar 29, 2017 9:50 AM, "Martin Lízner via bitcoin-dev" <
> bitcoin-***@lists.linuxfoundation.org> wrote:
>
> Im tending to believe, that HF is necessary evil now.
>
>
> I will firmly disagree. We know how to do a soft-fork blocksize increase.
> If it is decided that a block size increase is justified, we can do it with
> extension blocks in a way that achieves full backwards compatibility for
> all nodes.
>
> Barring a significant security motivation, there is no need to hardfork.
>
> I am also solidly unconvinced that increasing the blocksize today is a
> good move, even as little as SegWit does. It's too expensive for a home
> user to run a full node, and user-run full nodes are what provide the
> strongest defence against political manuveuring.
>
> When considering what block size is acceptable, the impact of running
> bitcoin in the background on affordable, non-dedicated home-hardware should
> be a top consideration.
>
> Disk space I believe is the most significant problem today, with RAM being
> the second most significant problem, and finally bandwidth consumption as
> the third most important consideration. I believe that v0.14 is already too
> expensive on all three fronts, and that block size increases shouldn't be
> considered at all until the requirements are reduced (or until consumer
> hardware is better, but I believe we are talking 3-7 years of waiting if we
> pick that option).
>
>
>
Jared Lee Richardson via bitcoin-dev
2017-03-29 19:10:42 UTC
In order for any blocksize increase to be agreed upon, more consensus is
needed. The proportion of users who believe no blocksize increase is
needed is larger than the hard-fork threshold Core wants (95% consensus).
The proportion of users who believe in microtransactions for all is also
larger than 5%, and each of those groups may be larger than 10%. I don't
think either the big-blocks faction or the low-node-costs faction has
even a simple majority of support. Getting consensus is going to be a big
mess, but it is critical that it is done.

On Wed, Mar 29, 2017 at 12:49 AM, Martin Lízner via bitcoin-dev <
bitcoin-***@lists.linuxfoundation.org> wrote:

> If there should be a hard-fork, Core team should author the code. Other
> dev teams have marginal support among all BTC users.
>
> Im tending to believe, that HF is necessary evil now. But lets do it in
> conservative approach:
> - Fix historical BTC issues, improve code
> - Plan HF activation date well ahead - 12 months+
> - Allow increasing block size on year-year basis as Luke suggested
> - Compromise with miners on initial block size bump (e.g. 2MB)
> - SegWit
>
> Martin Lizner
>
> On Tue, Mar 28, 2017 at 6:59 PM, Wang Chun via bitcoin-dev <
> bitcoin-***@lists.linuxfoundation.org> wrote:
>
>> I proposed this hard fork approach last year at Hong Kong Consensus,
>> but it was immediately rejected by the core devs at that meeting. More
>> than a year later, it seems that lots of people still haven't heard of
>> it, so I am posting it here again for comment.
>>
>> The basic idea is that, as many of us agree, a hard fork is risky and
>> should be well prepared. We need a long time to deploy it.
>>
>> Despite spam tx on the network, block capacity is approaching its
>> limit, and we must think ahead. Shall we code a patch right now to
>> remove the 1MB block size limit, but not activate it until far in the
>> future? I would propose removing the 1MB limit at the next block
>> halving in spring 2020, limiting the block size only to 32MiB, which
>> is the maximum size the current p2p protocol allows. This patch must
>> be in the immediate next release of Bitcoin Core.
>>
>> With this patch in Core's next release, Bitcoin works just as before;
>> no fork will ever occur until spring 2020. But everyone will know that
>> a fork is scheduled. Third-party services, libraries, wallets and
>> exchanges will have enough time to prepare for it over the next three
>> years.
>>
>> We don't yet have an agreement on how to increase the block size
>> limit. There have been many proposals over the past years: BIP100,
>> 101, 102, 103, 104, 105, 106, 107, 109, 148, 248, BU, and so on. With
>> this patch already in Core's release, those hard fork proposals all
>> become soft forks. We'll have enough time to discuss all of them and
>> decide which one to adopt. For example, if we choose to fork to only
>> 2MB, then since 32MiB is already scheduled, reducing it from 32MiB to
>> 2MB will be a soft fork.
>>
>> Anyway, we must code something right now, before it becomes too late.
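The soft-fork property claimed above can be sketched in a few lines of Python. The constants are illustrative, not actual consensus code: the point is only that once 32MiB is the scheduled limit, any later rule that is strictly tighter (e.g. 2MB) only rejects blocks, never accepts new ones, so old nodes keep following the chain.

```python
# Illustrative constants, not real consensus code.
SCHEDULED_LIMIT = 32 * 1024 * 1024   # 32 MiB, the current p2p message ceiling
TIGHTER_LIMIT = 2 * 1000 * 1000      # a hypothetical later 2 MB rule

def valid_under(limit, block_size):
    """Consensus size check: a block is valid iff it fits under the limit."""
    return block_size <= limit

# Every block accepted by the tighter rule is also accepted by the looser
# scheduled rule - the defining property of a soft fork.
assert all(
    valid_under(SCHEDULED_LIMIT, size)
    for size in range(0, TIGHTER_LIMIT + 1, 100_000)
    if valid_under(TIGHTER_LIMIT, size)
)
```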
praxeology_guy via bitcoin-dev
2017-03-29 19:36:25 UTC
I think at least the four following things have to be done before the block size can be increased by any significant amount:
1. Define a network-protocol-level UTXO snapshot format, with UTXO snapshots created automatically in a deterministic, periodic, and low-cost fashion, and the ability to synchronize starting from such a UTXO snapshot at the user's request.
2. SPV support from a pruned node that has the latest UTXO snapshot. This probably requires committing the UTXO snapshot hash to the block.
3. Given that the above fixes the problem of needing full block chain history storage, and that people are comfortable with such a security model, a good portion of the network can switch to this security model and still satisfy our desire for the system to be sufficiently distributed. This requires lots of testing.
4. More current studies on the effect of increasing the block size on synchronizing-node drop-out due to other factors such as network bandwidth, memory, and CPU usage.

Without doing the above, scheduling an increase to the block size would be reckless.
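The determinism requirement in point 1 can be sketched as follows. The serialization here - sorted (txid, vout, value) triples hashed with SHA-256 - is a hypothetical stand-in; a real design would define the format at the protocol level:

```python
import hashlib

def utxo_snapshot_hash(utxos):
    """Hypothetical deterministic commitment to a UTXO set: serialize
    entries in sorted order and hash the result. Point 2 above would
    commit this hash inside a block."""
    h = hashlib.sha256()
    for txid, vout, value in sorted(utxos):
        h.update(bytes.fromhex(txid))          # 32-byte txid
        h.update(vout.to_bytes(4, "little"))   # output index
        h.update(value.to_bytes(8, "little"))  # amount in satoshis
    return h.hexdigest()

# Determinism: the same set yields the same hash regardless of input order,
# so independently-built snapshots agree byte-for-byte.
a = [("aa" * 32, 0, 5000), ("bb" * 32, 1, 7000)]
assert utxo_snapshot_hash(a) == utxo_snapshot_hash(list(reversed(a)))
```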

Cheers,
Praxeology Guy

Staf Verhaegen via bitcoin-dev
2017-04-02 19:12:06 UTC
Jared Lee Richardson via bitcoin-dev wrote on Wed 2017-03-29 at 12:10
[-0700]:
> The proportion of users believing in microtransactions for all is also
> larger than 5%,

In order to evaluate this statement, "microtransaction" first has to be
defined. I guess there will also be no consensus on that...

greets,
Staf.
Raystonn . via bitcoin-dev
2017-03-29 19:50:48 UTC
Low node costs are only a good goal for nodes whose operators can afford to transact on the network they serve. Nobody is going to run a node for a network they do not use for their own transactions. If transaction fees prohibit use for most economic activity, node count will drop until nodes are generally run only by those who settle large amounts. That is very centralizing.

Raystonn

Tom Zander via bitcoin-dev
2017-03-30 10:34:45 UTC
On Wednesday, 29 March 2017 21:50:48 CEST Raystonn . via bitcoin-dev wrote:
> Low node costs are a good goal for nodes that handle transactions the node
> operator can afford. Nobody is going to run a node for a network they do
> not use for their own transactions. If transactions have fees that
> prohibit use for most economic activity, that means node count will drop
> until nodes are generally run by those who settle large amounts. That is
> very centralizing.
>
> Raystonn

The idea that people won't run a node for a network they don't use for
their own transactions is a very good observation, and a good reason to get
on-chain scaling happening well before Lightning hits.

--
Tom Zander
Blog: https://zander.github.io
Vlog: https://vimeo.com/channels/tomscryptochannel
David Vorick via bitcoin-dev
2017-03-30 11:19:19 UTC
> What we want is a true fee-market where the miner can decide to make a
> block smaller to get people to pay more fees, because if we were to go to
> 16MB blocks in one go, the cost of the miner would go up, but his reward
> based on fees will go down!
> A block so big that 100% of the transactions will always be mined in the
> next block will just cause a large section of people to no longer feel the
> need to pay fees.

> As such I don’t fear the situation where the block size limit goes up a
> lot in one go, because it is not in anyone’s interest to make the actual
> block size follow.

There have been attacks demonstrated where a malicious miner with
sufficient hashrate can leverage large blocks to exacerbate selfish mining.
Adversarial behaviors from miners need to be considered; it's not safe to
simply assume that a miner won't have reasons to attack the network. We
already know that large empty blocks (rather, blocks with fake
transactions) can be leveraged in ways that both damage the network and
increase miner profits.
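The propagation asymmetry behind the selfish-mining concern can be illustrated with a toy model. The throughput and interval numbers below are assumptions for illustration, not measurements:

```python
import math

def orphan_probability(block_size_mb, mb_per_second=1.0, interval=600.0):
    """Toy model: the chance a competing block is found while yours is
    still propagating, with Poisson block arrivals every `interval`
    seconds and an assumed relay throughput of `mb_per_second`."""
    propagation_delay = block_size_mb / mb_per_second
    return 1.0 - math.exp(-propagation_delay / interval)

# Larger blocks propagate more slowly, raising orphan risk for every
# miner *except* the one who produced the block (who starts mining on it
# instantly) - an asymmetry a large miner can deliberately exploit.
assert orphan_probability(32.0) > orphan_probability(1.0)
```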

In general, fear of other currencies passing Bitcoin is unsubstantiated.
Bitcoin has by far the strongest development team, and also is by far the
most decentralized. To the best of my knowledge, Bitcoin is the only
cryptocurrency out there that is both not-dead and also lacks a strong
central leadership.

A coin like ethereum may even be able to pass Bitcoin in market cap. But
that's okay. Ethereum has very different properties and it's not something
I would trust as a tool to provide me with political sovereignty. Ethereum
passing Bitcoin in market cap does not mean that it has proved superior to
Bitcoin. It could just mean that enterprises are really excited about
permissioned blockchains. That's not interesting to me at any market cap.

Bitcoin's core value add is and should continue to be decentralization and
trustlessness. Nobody is remotely close to competing with Bitcoin on those
fronts, and in my mind that's far more important than any of the other
mania anyway.
Jared Lee Richardson via bitcoin-dev
2017-03-30 21:42:31 UTC
> There have been attacks demonstrated where a malicious miner with
> sufficient hashrate can leverage large blocks to exacerbate selfish mining.

Can you give me a link to this? Having done a lot of mining, I really
really doubt this. I'm assuming the theory relies upon propagation times
and focuses on small miners versus large ones, but that's wrong.
Propagation times don't affect small miners disproportionately, though they
might affect small POOLS disproportionately; that isn't the same thing at
all. No mining operation since at least 2014 has run a full node directly
on each mining unit - it is incredibly impractical to do so. They retrieve only the
merkle root hash and other parameters from the stratum server, which is a
very small packet and does not increase with the size of the blocks. If
they really want to select which transactions to include, some pools offer
options of that sort(or can, I believe) but almost no one does. If they
don't like how their pool picks transactions, they'll use a different pool,
that simple.
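The point about work-unit size can be illustrated with a minimal Bitcoin-style merkle root sketch (double-SHA256, duplicating the last element on odd levels). Stratum framing is omitted and this is illustrative rather than wire-accurate:

```python
import hashlib

def merkle_root(txids):
    """Minimal sketch of a Bitcoin-style merkle root over hex txids."""
    level = [bytes.fromhex(t)[::-1] for t in txids]  # internal byte order
    while len(level) > 1:
        if len(level) % 2:              # odd count: duplicate the last hash
            level.append(level[-1])
        level = [
            hashlib.sha256(hashlib.sha256(level[i] + level[i + 1]).digest()).digest()
            for i in range(0, len(level), 2)
        ]
    return level[0][::-1].hex()

# Whether the block holds 2 or 2,000 transactions, the work a pool hands
# to mining hardware contains only this single 32-byte root (plus the
# other fixed-size header fields) - it does not grow with block size.
root = merkle_root(["ab" * 32, "cd" * 32])
assert len(bytes.fromhex(root)) == 32
```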

If there's some other theory about a miner exploiting higher blocksizes
selfishly then I'd love to read up on it to understand it. If what
you/others actually meant by that was smaller "pools," that's a much much
smaller problem. Pools don't earn major profits and generally are at the
mercy of their miners if they make bad choices or can't fix low
performance. For pools, block propagation time was a major major issue
even before blocks were full, and latency + packet loss between mining
units and the pool is also a big concern. I was seeing occasional block
propagation delays(over a minute) on a fiber connection in 2013/4 due to
minute differences in peering. If a pool can't afford enough bandwidth to
keep propagation times down, they can't be a pool. Bigger blocksizes will
make it so they even more totally-can't-be-a-pool, but they already can't
be a pool, so who cares. Plus, compact blocks should have already solved
nearly all of this problem, as I understand it.

So definitely want to know more if I'm misunderstanding the attack vector.

> We already know that large empty blocks (rather, blocks with fake
> transactions) can be leveraged in ways that both damage the network and
> increase miner profits.

Maybe you're meaning an attack where other pools get stuck on validation
due to processing issues? This is also a nonissue. The smallest viable
pool has enough difficulties with other, non-hardware related issues that
buying the largest, beefiest standard processor available with ample RAM
won't even come up on the radar. No one cares about $600 in hardware
versus $1000 in hardware when it takes you 6 weeks to get your peering and
block propagation configuration just right and another 6 months to convince
miners to substantially use your pool.

If you meant miners and not pools, that's also wrong. Mining hardware
doesn't validate blocks anymore, it hasn't been practical for years. They
only get the merkle root hash of the valid transaction set. The pool
handles the rest.

> In general, fear of other currencies passing Bitcoin is unsubstantiated.
> Bitcoin has by far the strongest development team, and also is by far the
> most decentralized.

Markets only care a little bit what your development team is like.
Ethereum has Vitalik, who is an incredibly smart and respectable dude,
while BU absolutely hates the core developers right now. Markets are more
likely to put more faith in a single leader than core right now if that
comparison was really made.

"Most decentralized" is nearly impossible to quantify, and has almost no
value to speculators. Since all of these markets are highly speculative,
they only care about future demand. Future demand relies upon future use.
Unsubstantiated? Ethereum is already 28% of Bitcoin by cap and 24% by
trading. Four months ago that was 4%. Their transaction volume also
doubled. What world are you living in?

> A coin like ethereum may even be able to pass Bitcoin in market cap. But
> that's okay. Ethereum has very different properties and it's not something
> I would trust as a tool to provide me with political sovereignty.

Well great, I guess so long as you're ok with it we'll just roll with it.
Wait, no. If Bitcoin loses its first-mover network effect, a small cadre
of die-hard libertarians are not going to be able to keep it from becoming
a page in the history books. Die hard libertarians can barely keep a voice
in the U.S. congress - neither markets nor day-to-day users particularly
care about the philosophy, they care about what it can do for them.

> Ethereum passing Bitcoin in market cap does not mean that it has proved
> superior to Bitcoin.

The markets have literally told us why Ethereum is shooting up. It's
because the Bitcoin community has fractured around a debate with nearly no
progress toward a solution for the last 3 years, and especially because BU
appears to be strong enough to think they can fork and the markets know
full well what a contentious fork will do to Bitcoin's near-term future.

> It could just mean that enterprises are really excited about permissioned
blockchains.

Then it would have happened not when the BU situation imploded but when
Microsoft announced they were working with Ethereum on things like that.
No one cared about Microsoft's announcement. You don't seriously believe
what you're saying, do you?

> That's not interesting to me at any market cap.

I agree with you, but Bitcoin becoming a page in the history books because
a few die-hard libertarians didn't think price or adoption was important is
a big, big concern, especially when they almost have veto power. Markets
don't care about philosophy, they care about future value. Bitcoin has
value because we think it may be the most useful new innovation in the
future. If we screw that future usefulness up, philosophy gives us no more
value than Friendster has today.

Aymeric Vitte via bitcoin-dev
2017-03-30 11:24:53 UTC
Except if people have some incentive to do it. A simple example: I have
some servers that are doing some work but are not actually so busy; I can
decide to run some nodes on them. This costs me nothing more (and is
better for the planet than setting up new servers) and I get some rewards.
(As an illustration, my servers are mining zcash and running zcash nodes;
this is of course absolutely not profitable, but since it does not disturb
what the servers are primarily intended for and I get some small ZEC with
no additional costs, why not do it?) Of course we can then consider that
people doing this are, in the end, using the network...



--
Zcash wallets made simple: https://github.com/Ayms/zcash-wallets
Bitcoin wallets made simple: https://github.com/Ayms/bitcoin-wallets
Get the torrent dynamic blocklist: http://peersm.com/getblocklist
Check the 10 M passwords list: http://peersm.com/findmyass
Anti-spies and private torrents, dynamic blocklist: http://torrent-live.org
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
Daniele Pinna via bitcoin-dev
2017-03-29 19:33:58 UTC
What about periodically committing the entire UTXO set to a special
checkpoint block which becomes the new de facto Genesis block?

Daniele

Andrew Johnson <***@gmail.com> wrote on Wed, 29 Mar 2017 16:41:29 +0000, re: [bitcoin-dev] Hard fork proposal from last week's meeting:

I believe that as we continue to add users to the system by scaling
capacity that we will see more new nodes appear, but I'm at a bit of a loss
as to how to empirically prove it.

I do see your point on increasing load on archival nodes, but the majority
of that load is going to come from new nodes coming online; they're the
only ones going after very old blocks. I could see that as a potential
attack vector, overwhelm the archival nodes by spinning up new nodes
constantly, therefore making it difficult for a "real" new node to get up
to speed in a reasonable amount of time.

Perhaps the answer there would be a way to pay an archival node a small
amount of bitcoin in order to retrieve blocks older than a certain cutoff?
Include an IP address for the node asking for the data as metadata in the
transaction... Archival nodes could set and publish their own policy, let
the market decide what those older blocks are worth. Would also help to
incentivize running archival node, which we do need. Of course, this isn't
very user friendly.

We can take this to bitcoin-discuss, if we're getting too far off topic.


On Wed, Mar 29, 2017 at 11:25 AM David Vorick <***@gmail.com>
wrote:

>
> On Mar 29, 2017 12:20 PM, "Andrew Johnson" <***@gmail.com>
> wrote:
>
> What's stopping these users from running a pruned node? Not every node
> needs to store a complete copy of the blockchain.
>
>
> Pruned nodes are not the default configuration, if it was the default
> configuration then I think you would see far more users running a pruned
> node.
>
> But that would also substantially increase the burden on archive nodes.
>
>
> Further discussion about disk space requirements should be taken to
> another thread.
>
>
> --
Andrew Johnson
David Vorick via bitcoin-dev
2017-03-29 20:28:35 UTC
> > When considering what block size is acceptable, the impact of running
> > bitcoin in the background on affordable, non-dedicated home-hardware
> > should be a top consideration.

> Why is that a given? Is there math that outlines what the risk levels
> are for various configurations of node distributions, vulnerabilities,
> etc? How does one even evaluate the costs versus the benefits of node
> costs versus transaction fees?

It's a political assessment. Full nodes are the ultimate arbiters of
consensus. When a contentious change is suggested, only the full nodes have
the power to either accept or reject this contentious change. If home users
are not running their own full nodes, then home users have to trust and
rely on other, more powerful nodes to represent them. Of course, the more
powerful nodes, simply by nature of having more power, are going to have
different opinions and objectives from the users. And it's impossible for
5000 nodes to properly represent the views of 5,000,000 users. Users
running full nodes is important to prevent political hijacking of the
Bitcoin protocol. Running a full node yourself is the only way to guarantee
(in the absence of trust - which Bitcoin is all about eliminating trust)
that changes you are opposed to are not introduced into the network.

> Disk space is not the largest cost, either today or in the future.
> Without historical checkpointing in some fashion, bandwidth costs are
> more than 2 orders of magnitude higher than every other cost for full
> listening nodes.

This statement is not true for home users; it is true for datacenter nodes.
For home users, 200 GB of bandwidth and 500 GB of bandwidth largely have
the exact same cost. I pay a fixed amount of money for my internet, and if
I use 500 GB the cost is identical to if I use 200 GB. So long as bandwidth
is kept under my home bandwidth cap, bandwidth for home nodes is _free_.

Similarly, disk space may only be $2/TB in bulk, but as a home user I have
a $1000 computer with 500 GB of total storage, 100 GB seems
(psychologically) to cost a lot closer to $200 than to $2. And if I go out
and buy an extra drive to support Bitcoin, it's going to cost about $50 no
matter what drive I pick, because that's just how much you have to spend to
get a drive. The fact that I get an extra 900 GB that I'm not using is
irrelevant - I spent $50 explicitly so I could run a bitcoin node.

The financials of home nodes follow a completely different math than the
costs you are citing by quoting datacenter prices.

> I don't know how to evaluate the impacts of RAM or CPU usage, or
> consequently electricity usage for a node yet. I'm open to quantifying
> any of those if there's a method, but it seems absurd that RAM could even
> become a significant factor given the abundance of cheap RAM nowadays
> with few programs needing it.

Many home machines only have 4GB of RAM. (I am acutely aware of this
because my own software consumes about 3.5GB of RAM, which means all of our
users stuck at 4 GB cannot use my software and Chrome at the same time).
0.14 uses more than 1 GB of RAM. This I think is not really a problem for
most people, but it becomes a problem if the amount of RAM required grows
enough that they can't have all of their programs open at the same time.
1GB I think is really the limit you'd want to have before you'd start
seeing users choose not to run nodes simply because they'd rather have 300
tabs open instead.

CPU usage I think is pretty minimal. Your node is pretty busy during IBD
which is annoying but tolerable. And during normal usage a user isn't even
going to notice. Same for electricity. They aren't going to notice at the
end of the month if their electricity bill is a dollar higher because of
Bitcoin.

> The consequence of your logic that holds node operational costs down is
> that transaction fees for users go up, adoption slows as various use cases
> become impractical, price growth suffers, and altcoins that choose lower
> fees over node cost concerns will exhibit competitive growth against
> Bitcoin's cryptocurrency market share. Even if you are right, that's
> hardly a tradeoff not worth thoroughly investigating from every angle; the
> consequences could be just as dire for Bitcoin in 10 years as it would be
> if we made ourselves vulnerable.

This is very much worth considering. If transaction fees are so high that
there is no use case at all for people unwilling to buy extra hardware for
Bitcoin (a dedicated node or whatever), then there is no longer a reason to
worry about these people as users. However, I think the fees would have to
get in the $50 range for that to start to be the case. When talking about
emergency funds - that is, $10k+ that you keep in case your government
defaults, hyperinflates, seizes citizen assets, etc. etc. (situations that
many Bitcoin users today have to legitimately worry about), then you are
going to be making a few transactions per year at most, and the cost of
fees on a home node may be $150 / yr, while the cost of dedicated hardware
might be $150/yr ($600 box amortized over 4 years). We are two orders of
magnitude away from this type of fee pressure, so I think it continues to
make sense to be considering the home nodes as the target that we want to
hit.
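The amortization above works out as follows; the numbers are the ones assumed in the paragraph (roughly 3 transactions per year for an emergency fund, $50 fees, a $600 machine over 4 years), not measured data:

```python
def yearly_fee_cost(fee_per_tx, txs_per_year):
    """Yearly cost of transacting from a home node, in dollars."""
    return fee_per_tx * txs_per_year

# Assumed figures from the paragraph above.
home_node_fees = yearly_fee_cost(50, 3)   # ~$150/yr in fees
dedicated_box = 600 / 4                   # $600 box amortized over 4 years

# The break-even point: only around ~$50 fees does dedicated hardware
# start to compete with a home node for a low-frequency user.
assert home_node_fees == dedicated_box == 150
```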

> What about periodically committing the entire UTXO set to a special
> checkpoint block which becomes the new de facto Genesis block?

This should be discussed in another thread but I don't think I'm alone in
saying that I think this could actually be done in a secure / safe /
valuable way if you did it correctly. It would reduce bandwidth pressure on
archive nodes, reduce disk pressure on full nodes, and imo make for a more
efficient network overall.
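A minimal sketch of the committing idea, purely to illustrate: hash a canonical serialization of the UTXO set into a single digest that a checkpoint block could carry. The serialization and hashing scheme here are hypothetical; real proposals use rolling hashes or Merkle structures so the commitment can be updated and proven incrementally.

```python
import hashlib

# Toy UTXO-set commitment: hash a canonical (sorted) serialization of the
# set. Purely illustrative - real proposals use incremental structures.
def utxo_commitment(utxos):
    """utxos: dict mapping (txid bytes, vout int) -> (amount int, script bytes)."""
    acc = hashlib.sha256()
    for (txid, vout), (amount, script) in sorted(utxos.items()):
        acc.update(txid)
        acc.update(vout.to_bytes(4, "little"))
        acc.update(amount.to_bytes(8, "little"))
        acc.update(script)
    return acc.hexdigest()

utxos = {(b"\x01" * 32, 0): (5000000000, b"\x51")}  # one 50 BTC output, OP_TRUE
print(len(utxo_commitment(utxos)))  # 64 hex characters
```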
Jared Lee Richardson via bitcoin-dev
2017-03-29 22:08:33 UTC
Permalink
Raw Message
> It's a political assessment. Full nodes are the ultimate arbiters of
consensus.

That's not true unless miners are thought of as identical to nodes,
which has not been true for nearly 4 years now. Nodes arbitrating
consensus is the BU theory - that nodes can restrain miners - but it
doesn't work. If miners were forked off from nonminers, the miner
network could keep their blockchain operational under attack from the
nodes far better than the nodes could keep theirs operational under
attack from the miners. The miners could effectively grind the node
network to a complete halt and probably still run their own fork
unimpeded at the same time. This would continue until the lack of
faith in the network drove the miners out of business economically, or
until the node network capitulated and followed the rules of the miner
network.

The reason BU isn't a dire threat is that there's a great rift between
the miners, just like there is between the average users, just as
Satoshi intended, and that rift gives the user network the economic
edge.

> If home users are not running their own full nodes, then home users have
to trust and rely on other, more powerful nodes to represent them. Of
course, the more powerful nodes, simply by nature of having more power, are
going to have different opinions and objectives from the users.

I think you're conflating mining with node operation here. Node users'
only power is to block the propagation of certain things. Since miners
also have a node endpoint, they can cut the node users out of the
equation by linking with each other directly - something they already
do out of practicality for propagation. Node users do not have the
power to arbitrate consensus; that is why we have blocks and PoW.

> And it's impossible for 5000 nodes to properly represent the views of
5,000,000 users. Users running full nodes is important to prevent political
hijacking of the Bitcoin protocol. [..] that changes you are opposed to
are not introduced into the network.

This isn't true. Non-miner nodes cannot produce blocks. Their opinion
is not represented in the blockchain in any way; the blockchain is
entirely made up of blocks. They can commit transactions, but the
transactions must follow an even stricter set of rules, and short of a
user-activated PoW change, the miners get to decide. It might be
viable for us to introduce ways for transactions to vote on things,
but that also isn't nodes voting - that's money voting.

Bitcoin is structured such that nodes have no votes because nodes
cannot be trusted. They don't inherently represent individuals, they
don't inherently represent value, and they don't commit work that is
played against each other to achieve a game-theory equilibrium. That's
miners.

> This statement is not true for home users, it is true for datacenter
nodes. For home users, 200 GB of bandwidth and 500 GB of bandwidth largely
have the exact same cost.

Your assumption is predicated upon the idea that users pay a fixed
cost for any volume of bandwidth. That assertion is true for some
users but not for others, and it has become increasingly untrue in
recent years with the addition of bandwidth caps by many ISPs. Even
users without a bandwidth cap can often get a very threatening letter
if they max out their connection 24/7. Assuming unlimited user
bandwidth in the future and comparing that with limited datacenter
bandwidth is extremely short-sighted. Fundamentally, if market forces
have established that datacenter bandwidth costs $0.09 per GB, what
makes you think that ISPs don't have to deal with the same
limitations? They do; the difference is that $0.09 per GB times the
total usage across the ISP's customer base is far, far lower than $80
times the number of customers. The more that a small group of
customers deviating wildly becomes a problem for them, the more they
will add bandwidth caps, send threatening letters, or even rate-limit
or stop serving those users.

Without that assumption, your math and examples fall apart - Bandwidth
costs for full archival nodes are nearly 50 times higher than storage costs
no matter whether they are at home or in a datacenter.

> The financials of home nodes follow a completely different math than the
costs you are citing by quoting datacenter prices.

No, they really aren't without that assumption. Yes, they are somewhat
different - if someone has a 2TB hard drive but only ever uses 40% of
it, the remaining hard drive space has a cost of zero. But those
specific examples break down when you average over several years and
fifty thousand users. If that same user was running a Bitcoin node and
hard drive space was indeed a concern, they would factor that desire
into the purchase of their next computer, preferring those with larger
hard drives. That reintroduces the cost for the same individual who
had no cost before. The cost difference doesn't work out to exactly
the same numbers as the datacenter costs - datacenters have a better
economy of scale but also have profit and business overhead - but all
of the math I've done indicates that over thousands of individuals and
several years of time, the costs land in the same ballpark. For
example: Comcast bandwidth cap = 1000 GB @ ~$80/month, i.e. $0.08/GB.
Amazon's first tier is currently $0.09. Much closer than I even
expected before I worked out the math. I'm open to being proven wrong.
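The ballpark comparison in that last example works out as follows (the cap and price are the assumed Comcast figures; the datacenter rate is the first-tier price quoted above):

```python
# Per-GB bandwidth cost comparison sketched above. The home figures are
# assumed (1000 GB monthly cap at ~$80/month); the datacenter rate is
# the quoted first-tier price.
home_cap_gb = 1000
home_monthly_usd = 80.0
home_per_gb = home_monthly_usd / home_cap_gb      # $0.08/GB

datacenter_per_gb = 0.09                          # $/GB, quoted above

print(round(home_per_gb, 2), datacenter_per_gb)   # 0.08 0.09
```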

> 0.14 uses more than 1 GB of RAM.

I'm running 0.13.2 and only see 300 MB of RAM. Why is 0.14 using three
times the RAM?

> 1GB I think is really the limit you'd want to have before you'd start
seeing users choose not to run nodes simply

Again, while I sympathize with the concept, I don't believe holding
the growth of the entire currency back based on minimum specs is a
fair tradeoff. The impact on use cases that depend on a given fee
level is total obliteration. That's unavoidable for things like
microtransactions, but a fee level of $1/tx allows for hundreds of
opportunities that a fee level of $100/tx does not. That difference
may be the deciding factor in the network effect between Bitcoin and a
competitor altcoin. Bitcoin dying out because a better-operated coin
steals its first-mover advantage is just as bad as Bitcoin dying out
because an attacker halted tx propagation and killed the network.
Probably even worse - first-mover advantages are almost never retaken,
but the network could recover from a peering attack with software
changes and community/miner responses.

> However, I think the fees would have to get in the $50 range for that to
start to be the case.

I calculated this out. If blocksizes aren't increased, but price increases
continue as they have in the last 3-5 years, per-node operational costs for
one month drop from roughly $10-15ish (using datacenter numbers, which you
said would be higher than home user numbers and might very well be when
amortized thoroughly) down to $5-8 in less than 8 years. If transaction
fees don't rise at all due to blockspace competition (i.e., they offset
only the minimum required for miners to economically protect Bitcoin),
they'll be above $10 in less than 4 years. I believe that comparing
1-month of node operational costs versus 1 transaction fee is a reasonable,
albeit imperfect, comparison of when users will stop caring.
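The shape of that projection can be sketched with simple compounding; the starting values and annual rates below are placeholder assumptions for illustration, not the actual figures behind the calculation:

```python
# Illustrative compound-growth sketch of the projection above. Starting
# values and annual rates are placeholder assumptions, not real data.
def project(start, annual_rate, years):
    """Compound a starting value at a fixed annual rate."""
    return start * (1 + annual_rate) ** years

node_cost_monthly = 12.0  # $/month today, midpoint of the $10-15 range cited
node_cost_rate = -0.08    # assumed yearly decline as hardware/bandwidth cheapen
fee_per_tx = 0.70         # $/tx today (assumed)
fee_rate = 1.0            # assumed doubling per year absent extra blockspace

for years in (4, 8):
    print(years,
          round(project(node_cost_monthly, node_cost_rate, years), 2),
          round(project(fee_per_tx, fee_rate, years), 2))
```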

That's not very far in the future at all, and fee-market competition will
probably be much, much worse for us and better for miners.

> When talking about emergency funds - that is, $10k+ that you keep in case
your government defaults, hyperinflates, seizes citizen assets, etc. etc.
(situations that many Bitcoin users today have to legitimately worry about),

So I don't mean to be rude here, but this kind of thinking is very
poor logic when applied to anyone who isn't already a libertarian
Bitcoin supporter. In the estimation of anyone outside the Bitcoin
world, Bitcoin is an extremely high-risk, unreliable store of value.
We like to compare it to "digital gold" because of the parameters that
Satoshi chose, but saying it does not make it true. For someone not
already a believer, Bitcoin is a risky, speculative investment in a
promising future technology, and gold is a stable physical asset with
4,000 years of acceptance history that has the same value in nearly
every city on the planet. Bitcoin is difficult to purchase, and it is
difficult to find someone to exchange it for goods or services.

Could Bitcoin become more like what you described in the future? A lot
of us hope so or we wouldn't be here right now. But in the meantime,
any other cryptocurrency that chooses parameters similar to gold could
eclipse Bitcoin if we falter. If their currency is more usable because
they balance the ratio of node operational costs/security versus
transaction fees/usability, they have a pretty reasonable chance of
doing so. And then you won't store your $10k+ in Bitcoin, you'll store
it in $altcoin. The market doesn't really care who wins.

> We are two orders of magnitude away from this type of fee pressure, so I
think it continues to make sense to be considering the home nodes as the
target that we want to hit.

That's nothing, we've never had any fee competition at all until basically
November of last year. From December to March transaction fees went up by
250%, and they doubled from May to December before that. Transactions per
year are up 80% per year for the last 4 years. Things are about to get
screwed.


On Wed, Mar 29, 2017 at 1:28 PM, David Vorick via bitcoin-dev <
bitcoin-***@lists.linuxfoundation.org> wrote:

> > > When considering what block size is acceptable, the impact of running
> bitcoin in the background on affordable, non-dedicated home-hardware should
> be a top consideration.
>
> > Why is that a given? Is there math that outlines what the risk levels
> are for various configurations of node distributions, vulnerabilities,
> etc? How does one even evaluate the costs versus the benefits of node
> costs versus transaction fees?
>
> It's a political assessment. Full nodes are the ultimate arbiters of
> consensus. When a contentious change is suggested, only the full nodes have
> the power to either accept or reject this contentious change. If home users
> are not running their own full nodes, then home users have to trust and
> rely on other, more powerful nodes to represent them. Of course, the more
> powerful nodes, simply by nature of having more power, are going to have
> different opinions and objectives from the users. And it's impossible for
> 5000 nodes to properly represent the views of 5,000,000 users. Users
> running full nodes is important to prevent political hijacking of the
> Bitcoin protocol. Running a full node yourself is the only way to guarantee
> (in the absence of trust - which Bitcoin is all about eliminating trust)
> that changes you are opposed to are not introduced into the network.
>
> > Disk space is not the largest cost, either today or in the future.
> Without historical checkpointing in some fashion, bandwidth costs are more
> than 2 orders of magnitude higher cost than every other cost for full
> listening nodes.
>
> This statement is not true for home users, it is true for datacenter
> nodes. For home users, 200 GB of bandwidth and 500 GB of bandwidth largely
> have the exact same cost. I pay a fixed amount of money for my internet,
> and if I use 500 GB the cost is identical to if I use 200 GB. So long as
> bandwidth is kept under my home bandwidth cap, bandwidth for home nodes is
> _free_.
>
> Similarly, disk space may only be $2/TB in bulk, but as a home user I have
> a $1000 computer with 500 GB of total storage, 100 GB seems
> (psychologically) to cost a lot closer to $200 than to $2. And if I go out
> and buy an extra drive to support Bitcoin, it's going to cost about $50 no
> matter what drive I pick, because that's just how much you have to spend to
> get a drive. The fact that I get an extra 900 GB that I'm not using is
> irrelevant - I spent $50 explicitly so I could run a bitcoin node.
>
> The financials of home nodes follow a completely different math than the
> costs you are citing by quoting datacenter prices.
>
> > I don't know how to evaluate the impacts of RAM or CPU usage, or
> consequently electricity usage for a node yet. I'm open to quantifying any
> of those if there's a method, but it seems absurd that ram could even
> become a signficant factor given the abundance of cheap ram nowadays with
> few programs needing it.
>
> Many home machines only have 4GB of RAM. (I am acutely aware of this
> because my own software consumes about 3.5GB of RAM, which means all of our
> users stuck at 4 GB cannot use my software and Chrome at the same time).
> 0.14 uses more than 1 GB of RAM. This I think is not really a problem for
> most people, but it becomes a problem if the amount of RAM required grows
> enough that they can't have all of their programs open at the same time.
> 1GB I think is really the limit you'd want to have before you'd start
> seeing users choose not to run nodes simply because they'd rather have 300
> tabs open instead.
>
> CPU usage I think is pretty minimal. Your node is pretty busy during IBD
> which is annoying but tolerable. And during normal usage a user isn't even
> going to notice. Same for electricity. They aren't going to notice at the
> end of the month if their electricity bill is a dollar higher because of
> Bitcoin.
>
> > The consequence of your logic that holds node operational costs down is
> that transaction fees for users go up, adoption slows as various use cases
> become impractical, price growth suffers, and alt coins that choose lower
> fees over node cost concerns will exhibit competitive growth against
> Bitcoin's crypto-currency market share. Even if you are right, that's
> hardly a tradeoff not worth thoroughly investigating from every angle, the
> consequences could be just as dire for Bitcoin in 10 years as it would be
> if we made ourselves vulnerable.
>
> This is very much worth considering. If transaction fees are so high that
> there is no use case at all for people unwilling to buy extra hardware for
> Bitcoin (a dedicated node or whatever), then there is no longer a reason to
> worry about these people as users. However, I think the fees would have to
> get in the $50 range for that to start to be the case. When talking about
> emergency funds - that is, $10k+ that you keep in case your government
> defaults, hyperinflates, seizes citizen assets, etc. etc. (situations that
> many Bitcoin users today have to legitimately worry about), then you are
> going to be making a few transactions per year at most, and the cost of
> fees on a home node may be $150 / yr, while the cost of dedicated hardware
> might be $150/yr ($600 box amortized over 4 years). We are two orders of
> magnitude away from this type of fee pressure, so I think it continues to
> make sense to be considering the home nodes as the target that we want to
> hit.
>
> > What about periodically committing the entire UTXO set to a special
> checkpoint block which becomes the new de facto Genesis block?
>
> This should be discussed in another thread but I don't think I'm alone in
> saying that I think this could actually be done in a secure / safe /
> valuable way if you did it correctly. It would reduce bandwidth pressure on
> archive nodes, reduce disk pressure on full nodes, and imo make for a more
> efficient network overall.
>
>
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-***@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
Luv Khemani via bitcoin-dev
2017-03-30 07:11:21 UTC
Permalink
Raw Message
>> If home users are not running their own full nodes, then home users have to trust and rely on other, more powerful nodes to represent them. Of course, the more powerful nodes, simply by nature of having more power, are going to have different opinions and objectives from the users.

>I think you're conflating mining with node operation here. Node users only power is to block the propagation of certain things. Since miners also have a node endpoint, they can cut the node users out of the equation by linking with eachother directly - something they already do out of practicality for propagation. Node users do not have the power to arbitrate consensus, that is why we have blocks and PoW.

You are only looking at technical aspects and missing the political aspect.

Node users decide what a Bitcoin is. It matters not how much hash power is behind an inflationary supply chain fork; full nodes protect the user from the change of any properties of Bitcoin which they do not agree with. The ability to retain this power for users is of prime importance and is arguably what gives Bitcoin most of its value. Any increase in the cost to run a full node is an increase in the cost to maintain monetary sovereignty. The ability for a user to run a node is what keeps the miners honest and prevents them from rewriting any of Bitcoin's rules.

If it's still difficult to grasp the above paragraph, ask yourself the following questions:
- What makes Bitcoin uncensorable?
- What gives confidence that the 21 million limit will be upheld?
- What makes transactions irreversible?
- If hashpower were king as you make it out to be, why haven't miners making up majority hashrate who want bigger blocks been able to change the blocksize?

The market is not storing 10s of billions of dollars in Bitcoin despite all its risks because it is useful for everyday transactions; that is a solved problem in every part of the world (Cash/Visa/etc.).

Having said that, I fully empathise with your view that increasing transaction fees might allow competitors to gain market share for low-value use cases. By all means, we should look into ways of solving the problem. But all these debates around blocksize are a total waste of time. Even if we fork to 2MB, 5MB, or 10MB, it is irrelevant in the larger picture; transaction capacity will still be too low for global usage in the medium-long term. The additional capacity from blocksize increases is a linear improvement with very large systemic costs, compared with the userbase and usage which are growing exponentially. Lightning potentially offers a couple of orders of magnitude of scaling and will make blocksize a non-issue for years to come. Even if it fails to live up to the hype, you should not discount the market innovating solutions when there is money to be made.
Jared Lee Richardson via bitcoin-dev
2017-03-30 17:16:41 UTC
Permalink
Raw Message
> You are only looking at technical aspects and missing the political
aspect.

Nodes don't do politics. People do, and politics is a lot larger with a
lot more moving parts than just node operation.

> full nodes protect the user from the change of any properties of Bitcoin
which they do not agree with.

Full nodes protect from nothing if the chain they attempt to use is
nonfunctional.

> The ability to retain this power for users is of prime importance and is
arguably what gives Bitcoin most of it's value
> Any increase in the cost to run a full node is an increase in cost to
maintain monetary sovereignty

This power is far more complicated than just nodes. You're implying that
node operation == political participation. Node operation is only a very
small part of the grand picture of the bitcoin balance of power.

> The ability for a user to run a node is what keeps the miners honest and
prevents them from rewriting any of Bitcoin's rules.

No, it isn't. Nodes disagreeing with miners is necessary but not
sufficient to prevent that. Nodes can't utilize a nonfunctional chain, nor
can they utilize a coin with no exchanges.

> What makes Bitcoin uncensorable

Only two things: 1. Node propagation being strong enough that a target
node can't be surrounded by attacker nodes (or so that attacker nodes
can't segment honest nodes), and 2. Miners being distributed in enough
countries and locations to avoid any single outside attacker group
having enough leverage to prevent transaction inclusion, and miners
also having enough incentives (philosophical or economic) to refuse to
collude towards transaction exclusion.

Being able to run a node yourself has no real effect on either of the two.
Either we have enough nodes that an attacker can't segment the network or
we don't.

> What gives confidence that the 21 million limit will be upheld

What you're describing would result in a fork war. The opposition to
this would be widespread, and preventing an attempt relies upon mutual
destruction. If users refused to get on board, exchanges would follow
users. If miners refused to get on board, the attempt would be equally
dead in the water. It would require a majority of users, businesses,
and miners to change the limit; doing so without an overwhelming
majority (90% at least) would still result in a contentious fork that
punished both sides (in price, confidence, adoption, and possibly
chain or node attacks) for refusing to agree.

Nodes have absolutely no say in the matter if they can't segment the
network, and even if they could their impact could be repaired. Users !=
Nodes.

> What makes transactions irreversible

Err, this makes me worry that you don't understand how blockchains work...
This is because miners are severely punished for attempting to mine on
anything but the longest chain. Nodes have absolutely no say in the
matter; they always follow the longest (most-work) valid chain unless
a hardfork was applied. If the hardfork has overwhelming consensus,
i.e. stopping a 51% attack, then the attack would be handled. If the
hardfork did not have overwhelming consensus, it would result in
another fork war requiring users, businesses, and miners to actively
decide which to support and how, and once again would involve mutual
destruction on both forks.

Nodes don't decide any of these things. Nodes follow the longest chain
and have no practical choices in the matter. Users not running nodes
doesn't diminish their power - mutual destruction comes from the
market forces on the exchanges, and they couldn't give a rat's ass
whether you run a node or not.
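To make the "nodes follow the longest chain" point concrete: a node selects the valid chain with the most accumulated work, which is also where its one real power (validity filtering) lives. A toy sketch with made-up chains and work values:

```python
# Toy illustration of chain selection: a node picks the valid chain
# with the most accumulated work. Chains and work values are made up.
def best_chain(chains):
    # Filter out chains that violate the node's consensus rules,
    # then pick the one with the most total work.
    valid = [c for c in chains if c["valid"]]
    return max(valid, key=lambda c: c["total_work"])

chains = [
    {"name": "honest", "total_work": 105, "valid": True},
    {"name": "more-work-but-invalid", "total_work": 120, "valid": False},
    {"name": "minority", "total_work": 90, "valid": True},
]
print(best_chain(chains)["name"])  # honest
```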

> The market is not storing 10s of billions of dollars in Bitcoin despite
all it's risks because it is useful for everyday transactions, that is a
solved problem in every part of the world (Cash/Visa/etc..).

This is just the "bitcoin is gold" argument. Bitcoin is not gold. For
someone not already a believer, Bitcoin is a risky, speculative
investment in a promising future technology, whereas gold is a stable
physical asset with 4,000 years of acceptance history that has the
same value in nearly every city on the planet. Bitcoin is difficult to
purchase and difficult to find someone to exchange for goods or
services. Literally the only reason we have 10s of billions of dollars
of value is speculation, which includes nearly all Bitcoin
users/holders and almost all businesses and miners. While Bitcoin
borrows useful features from gold, it has more possible uses,
including uses that were never possible before Bitcoin existed, and we
believe that gives it huge potential.

The ability of other systems to do transactions, like Visa or cash,
comes with the limitations of those systems. Bitcoin was designed to
break those limitations and STILL provide the ability to do
transactions. We might all agree Bitcoin isn't ever going to solve the
microtransaction problem, at least not on-chain, but saying Bitcoin
doesn't need utility is just foolish. Gold doesn't need utility; gold
has 4,000 years of history. We don't.

> Even if we fork to 2MB, 5MB, 10MB. It is irrelevant in the larger
picture, transaction capacity will still be too low for global usage in the
medium-long term.

Which is why it needs to be a formula or a continuous process, not a single
number.

> Even if it fails to live up to the hype, you should not discount the
market innovating solutions when there is money to be made.

That's like saying it would be better to do nothing so someone else solves
our problem for us than it would be for us to do what we can to solve it
ourselves. Someone else solving our problem may very well be Ethereum, and
"solving it for us" is pulling Bitcoin investments, users and nodes away
into Ethereum.

> The additional capacity from blocksize increases are linear improvements
with very large systemic costs compared with the userbase and usage which
is growing exponentially.

The capacity increases do not have to be linear. The increases in utility
are linear with blocksize increases, but so are the costs. There's no
reason those blocksize increases can't be tied to or related to usage
increases, so long as the concerns about having too few nodes (or too few
fees) for security are handled.
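A "formula or continuous process" could look something like scheduled compounding growth, in the spirit of proposals like BIP101; the base size and growth rate here are hypothetical:

```python
# Hypothetical formula-driven blocksize schedule, in the spirit of the
# "formula or continuous process" suggested above. Parameters are made up.
BASE_MB = 1.0
ANNUAL_GROWTH = 0.17   # assumed ~17%/yr, roughly tracking bandwidth trends

def max_block_mb(years_since_activation):
    """Block size limit (MB) as a function of years since activation."""
    return BASE_MB * (1 + ANNUAL_GROWTH) ** years_since_activation

for y in (0, 5, 10):
    print(y, round(max_block_mb(y), 2))
```

Tying the rate to observed usage instead of a fixed constant would address the "usage increases" point, at the cost of making the limit depend on on-chain data.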



On Thu, Mar 30, 2017 at 12:11 AM, Luv Khemani <***@hotmail.com> wrote:

>
> >> If home users are not running their own full nodes, then home users
> have to trust and rely on other, more powerful nodes to represent them. Of
> course, the more powerful nodes, simply by nature of having more power, are
> going to have different opinions and objectives from the users.
>
> >I think you're conflating mining with node operation here. Node users
> only power is to block the propagation of certain things. Since miners
> also have a node endpoint, they can cut the node users out of the equation
> by linking with eachother directly - something they already do out of
> practicality for propagation. Node users do not have the power to
> arbitrate consensus, that is why we have blocks and PoW.
>
> You are only looking at technical aspects and missing the political aspect.
>
> Node users decide what a Bitcoin is. It matters not how much hash power is
> behind a inflationary supply chain fork, full nodes protect the user from
> the change of any properties of Bitcoin which they do not agree with. The
> ability to retain this power for users is of prime importance and is
> arguably what gives Bitcoin most of it's value. Any increase in the cost to
> run a full node is an increase in cost to maintain monetary sovereignty.
> The ability for a user to run a node is what keeps the miners honest and
> prevents them from rewriting any of Bitcoin's rules.
>
> If it's still difficult to grasp the above paragraph, ask yourself the
> following questions,
> - What makes Bitcoin uncensorable
> - What gives confidence that the 21 million limit will be upheld
> - What makes transactions irreversible
> - If hashpower was king as you make it to be, why havn't miners making up
> majority hashrate who want bigger blocks been able to change the blocksize?
>
> The market is not storing 10s of billions of dollars in Bitcoin despite
> all it's risks because it is useful for everyday transactions, that is a
> solved problem in every part of the world (Cash/Visa/etc..).
>
> Having said that, i fully empathise with your view that increasing
> transaction fees might allow competitors to gain marketshare for low value
> use cases. By all means, we should look into ways of solving the problem.
> But all these debates around blocksize is a total waste of time. Even if we
> fork to 2MB, 5MB, 10MB. It is irrelevant in the larger picture, transaction
> capacity will still be too low for global usage in the medium-long term.
> The additional capacity from blocksize increases are linear improvements
> with very large systemic costs compared with the userbase and usage which
> is growing exponentially. Lightning potentially offers a couple or orders
> of magnitude of scaling and will make blocksize a non-issue for years to
> come. Even if it fails to live up to the hype, you should not discount the
> market innovating solutions when there is money to be made.
>
>
Luv Khemani via bitcoin-dev
2017-03-31 04:21:17 UTC
Permalink
Raw Message
> Nodes don't do politics. People do, and politics is a lot larger with a lot more moving parts than just node operation.


Node operation is making a stand on what money you will accept.

I.e., your local store will only accept US Dollars and not Japanese Yen. Without being able to run a node, you have no way to independently determine what you are receiving; you could be paid Zimbabwe Dollars and wouldn't know any better.

> Full nodes protect from nothing if the chain they attempt to use is nonfunctional.

This is highly subjective.
Just because it is nonfunctional to you, does not mean it is nonfunctional to existing users.

> This power is far more complicated than just nodes.

I never implied otherwise.

> You're implying that node operation == political participation.

Of course it is. Try paying for my goods using BU/Ethereum/Dash/etc., or a Bitcoin forked with inflation; you will not get any goods regardless of how much hashrate those coins have.

> Miners being distributed in enough countries and locations to avoid any single outside attacker group from having enough leverage to prevent transaction inclusion, and miners also having enough incentives(philosophical or economic) to refuse to collude towards transaction exclusion.

It's good that you see the importance of this. You should also take into consideration the number of independent mining entities it takes to achieve 51% hashrate. It will be of little use to have thousands of independent miners/pools if 3 large pools make up 51% of hash rate and collude to attack the network.

> If users refused to get on board, exchanges would follow users. If miners refused to get on board, the attempt would be equally dead in the water. It would require a majority of users, businesses and miners to change the limit;

> Nodes have absolutely no say in the matter if they can't segment the network, and even if they could their impact could be repaired. Users != Nodes.

Nodes define which network they want to follow. Without a node, you don't even get to decide which segment you are on. Either miners decide (for SPV wallets) or your wallet's server's node decides. You have no control without a node of your own.

>> What makes transactions irreversible
>Nodes have absolutely no say in the matter, they always follow the longest chain unless a hardfork was applied.

My bad here, hashpower decides order. This is the sole reason we have mining: to order transactions.

> Mutual destruction comes from the market forces on the exchanges, and they could give a rats ass whether you run a node or not.

Ability to run a node and validate rules => Confidence in currency => Higher demand => Higher exchange rate

I would not be holding any Bitcoins if it was unfeasible for me to run a node and I instead had to trust some 3rd party that the currency was not being inflated/censored. Bitcoin has value because of its trustless properties. Otherwise, there is no difference between cryptocurrencies and fiat.

> Literally the only reason we have 10s of billions of dollars of value is because speculation, which includes nearly all Bitcoin users/holders and almost all businesses and miners. While Bitcoin borrows useful features from gold, it has more possible uses, including uses that were never possible before Bitcoin existed, and we believe that gives it huge potential.
> The ability of other systems to do transactions, like visa or cash, come with the limitations of those systems. Bitcoin was designed to break those limitations and STILL provide the ability to do transactions. We might all agree Bitcoin isn't going to ever solve the microtransaction problem, at least not on-chain, but saying Bitcoin doesn't need utility is just foolish. Gold doesn't need utility, gold has 4,000 years of history. We don't.
> There's no reason those blocksize increases can't be tied to or related to usage increases

Blocksize has nothing to do with utility, only the cost of on-chain transactions.
OTOH, increasing the blocksize has a lot to do with introducing the very limitations that Visa/Cash have.
Why would you risk destroying Bitcoin's primary proposition (removing the limitations of Cash/Visa) for an insignificant capacity increase?

> That's like saying it would be better to do nothing so someone else solves our problem for us than it would be for us to do what we can to solve it ourselves. Someone else solving our problem may very well be Ethereum, and "solving it for us" is pulling Bitcoin investments, users and nodes away into Ethereum.

Who says nothing is being done? Segwit, Lightning, pre-loaded wallets like Coinbase are all solutions.




On Thu, Mar 30, 2017 at 12:11 AM, Luv Khemani <***@hotmail.com<mailto:***@hotmail.com>> wrote:


>> If home users are not running their own full nodes, then home users have to trust and rely on other, more powerful nodes to represent them. Of course, the more powerful nodes, simply by nature of having more power, are going to have different opinions and objectives from the users.

>I think you're conflating mining with node operation here. Node users' only power is to block the propagation of certain things. Since miners also have a node endpoint, they can cut the node users out of the equation by linking with each other directly - something they already do out of practicality for propagation. Node users do not have the power to arbitrate consensus; that is why we have blocks and PoW.

You are only looking at technical aspects and missing the political aspect.

Node users decide what a Bitcoin is. It matters not how much hash power is behind an inflationary chain fork; full nodes protect the user from any change to the properties of Bitcoin which they do not agree with. The ability to retain this power for users is of prime importance and is arguably what gives Bitcoin most of its value. Any increase in the cost to run a full node is an increase in the cost to maintain monetary sovereignty. The ability for a user to run a node is what keeps miners honest and prevents them from rewriting any of Bitcoin's rules.

If it's still difficult to grasp the above paragraph, ask yourself the following questions,
- What makes Bitcoin uncensorable
- What gives confidence that the 21 million limit will be upheld
- What makes transactions irreversible
- If hashpower were king as you make it out to be, why haven't miners making up the majority hashrate, who want bigger blocks, been able to change the blocksize?

The market is not storing 10s of billions of dollars in Bitcoin despite all its risks because it is useful for everyday transactions; that is a solved problem in every part of the world (Cash/Visa/etc.).

Having said that, I fully empathise with your view that increasing transaction fees might allow competitors to gain market share for low-value use cases. By all means, we should look into ways of solving the problem. But all these debates around blocksize are a total waste of time. Even if we fork to 2MB, 5MB, 10MB, it is irrelevant in the larger picture; transaction capacity will still be too low for global usage in the medium-long term. The additional capacity from blocksize increases is a linear improvement with very large systemic costs, compared with a userbase and usage which are growing exponentially. Lightning potentially offers a couple of orders of magnitude of scaling and will make blocksize a non-issue for years to come. Even if it fails to live up to the hype, you should not discount the market innovating solutions when there is money to be made.
Jared Lee Richardson via bitcoin-dev
2017-03-31 05:28:33 UTC
Permalink
Raw Message
> Node operation is making a stand on what money you will accept.

> I.e. your local store will only accept US Dollars and not Japanese Yen. Without being able to run a node, you have no way to independently determine what you are receiving; you could be paid Zimbabwe Dollars and wouldn't know any better.

Err, no, that's what happens when you double click the Ethereum icon
instead of the Bitcoin icon. Just because you run "Bitcoin SPV"
instead of "Bitcoin Verify Everyone's Else's Crap" doesn't mean you're
somehow going to get Ethereum payments. Your verification is just
different and the risks that come along with that are different. It's
only confusing if you make it confusing.

> This is highly subjective.
> Just because it is nonfunctional to you, does not mean it is nonfunctional to existing users.

If every block that is mined for them is deliberately empty because of
an attacker, that's nonfunctional. You can use whatever semantics you
want to describe that situation, but that's clearly what I meant.

> Of course it is. Try paying for my goods using BU/Ethereum/Dash/etc., or a Bitcoin forked with inflation; you will not get any goods regardless of how much hashrate those coins have.

As above, if someone operates Bitcoin in SPV mode they are not
magically at risk of getting Dashcoins. They send and receive
Bitcoins just like everyone else running Bitcoin software. There's no
confusion about it and it doesn't have anything to do with hashrates
of anyone. It is just a different method of verification with
corresponding different costs of use and different security
guarantees.

> You should also take into consideration the number of independent mining entities it takes to achieve 51% hashrate. It will be of little use to have thousands of independent miners/pools if 3 large pools make up 51% of hash rate and collude to attack the network.

We're already fucked, China has 61% of the hashrate and the only thing
we can do about it is to wait for the Chinese electrical
supply/demand/transmission system to rebalance itself. Aside from
that little problem, mining distributions and pool distributions don't
significantly factor into the blocksize debate. The debate is a
choice between nodes paying more to allow greater growth and adoption,
or nodes constraining adoption in favor of debatable security
concerns.

> Nodes define which network they want to follow.

Do you really consider it choosing when there is only a single option?
And even if there were, the software would choose it for you. If it
is a Bitcoin client, it follows the Bitcoin blockchain. There is no
BU blockchain at the moment, and Bitcoin software can't possibly start
following Ethereum blockchains.

> Without a Node, you don't even get to decide which segment you are on.

Yes you do, if the segment options are known (and if they aren't,
running a node likely won't help you choose either; it will choose by
accident and you'll have no idea). You would get to choose whose
verifications to request/check, and thus choose which segment to
follow, if any.

> Ability to run a node and validate rules => Confidence in currency

This is only true for the small minority that actually need that added
level of security & confidence, and the paranoid people who believe
they need it when they really, really don't. Some guy on reddit
spouted off the same garbage logic, but was much quieter when I got
him to admit that he didn't actually read the code of Bitcoin that he
downloaded and ran, nor any of the code of the updates. He trusted.
*gasp*

The average person doesn't need that level of security. They do
however need to be able to use it, which they cannot right now if you
consider "average" to be at least 50% of the population.

> Higher demand => Higher exchange rate

Demand comes from usage and adoption. Neither can happen without us
being willing to give other people the option to trade security
features for lower costs.

> I would not be holding any Bitcoins if it was unfeasible for me to run a Node and instead had to trust some 3rd party that the currency was not being inflated/censored.

Great. Somehow I think Bitcoin's future involves very few more people
like you, and very many people who aren't paranoid and just want to be
able to send and receive Bitcoins.

> Bitcoin has value because of its trustless properties. Otherwise, there is no difference between cryptocurrencies and fiat.

No, it has its value for many, many reasons; trustless properties are
only one of them. What I'm suggesting doesn't involve giving up
trustless properties except in your head (and not even then, since you
would almost certainly be able to afford to run a node for the rest of
your life if Bitcoin's value continues to rise as it has in the past).
And even if it did, there are a lot more reasons that a lot more people
than you would use it.

> Blocksize has nothing to do with utility, only cost of on-chain transactions.

Are you really this dense? If the cost of on-chain transactions
rises, numerous use cases get killed off. At $0.10 per tx you
probably won't buy in-game digital microtransactions with it, but you
might buy coffee with it. At $1 per tx, you probably won't buy coffee
with it but you might pay your ISP bill with it. At $20 per tx, you
probably won't pay your ISP bill with it, but you might pay your rent.
At $300 per tx you probably won't use it for anything, but a company
purchasing goods from China might. At $4000 per tx that company
probably won't use it, but international funds settlement for
million-dollar transactions might use it.

At each fee step along the way you kill off hundreds or thousands of
possible uses of Bitcoin. Killing those off means fewer people will
use it, so they will use something else instead.
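The fee ladder above can be restated as a simple threshold rule: a payment type drops off roughly when the fee exceeds some small fraction of the payment. A toy sketch of that rule (the payment sizes and the 2% cutoff are my own illustrative assumptions, not figures from this thread):

```python
# Hypothetical payment sizes in USD for the use cases named above.
use_cases = {
    "in-game microtransaction": 1,
    "coffee": 5,
    "ISP bill": 60,
    "rent": 1_500,
    "import order from China": 50_000,
    "international settlement": 1_000_000,
}

def survives(payment_usd: float, fee_usd: float, cutoff: float = 0.02) -> bool:
    """A use case plausibly survives while the fee stays under some
    small fraction of the payment; 2% is an arbitrary cutoff."""
    return fee_usd <= payment_usd * cutoff

for fee in (0.10, 1, 20, 300, 4000):
    alive = [name for name, size in use_cases.items() if survives(size, fee)]
    print(f"${fee}/tx -> {alive}")
```

Under these assumptions the surviving set shrinks at each fee tier in roughly the order the paragraph above describes.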

> OTOH, increasing the blocksize has a lot to do with introducing the very limitations that Visa/Cash have.

No they don't. They only give people the option to pay more for
higher security or to accept lower security and use Bitcoin anyway.

> Why would you risk destroying Bitcoin's primary proposition (removing the limitations of Cash/Visa) for an insignificant capacity increase?

So far as anyone has presented actual numbers, there's no reason to
believe larger blocksizes endanger anything of the sort, even if I
agreed that that was Bitcoin's primary proposition. And I don't
believe we need an insignificant capacity increase, I used to think
that way though. I strongly believe we can handle massive increases
by adjusting our expectations of what nodes do, how they operate, how
they justify the price of their services, and what levels of security
are available and appropriate for various levels of transaction risk.

> Who says nothing is being done? Segwit, Lightning, pre-loaded wallets like Coinbase are all solutions.

Segwit is a minuscule blocksize increase and wholly inadequate
compared to the scope of the problem. Good for other reasons, though.
Lightning is not Bitcoin, it is something different (but not bad IMO)
that has different features and different consequences. I guess you
think it is ok that if your lightning node goes offline at the wrong
time, you could lose funds you never transacted with in the first
place? No? Oh, then you must be ok with lightning hub centralization
as well as paying a monthly fee to lightning hubs for their
services. Wait, that sounds an awful lot like Visa....

I have no idea what you're referring to with the pre-loaded wallets point.


On Thu, Mar 30, 2017 at 9:21 PM, Luv Khemani <***@hotmail.com> wrote:
>
> > Nodes don't do politics. People do, and politics is a lot larger with a lot more moving parts than just node operation.
>
>
> Node operation is making a stand on what money you will accept.
>
> I.e. your local store will only accept US Dollars and not Japanese Yen. Without being able to run a node, you have no way to independently determine what you are receiving; you could be paid Zimbabwe Dollars and wouldn't know any better.
>
>
> > Full nodes protect from nothing if the chain they attempt to use is nonfunctional.
>
> This is highly subjective.
> Just because it is nonfunctional to you, does not mean it is nonfunctional to existing users.
>
> > This power is far more complicated than just nodes.
>
> I never implied otherwise.
>
> > You're implying that node operation == political participation.
>
> Of course it is. Try paying for my goods using BU/Ethereum/Dash/etc., or a Bitcoin forked with inflation; you will not get any goods regardless of how much hashrate those coins have.
>
> > Miners being distributed in enough countries and locations to avoid any single outside attacker group from having enough leverage to prevent transaction inclusion, and miners also having enough incentives(philosophical or economic) to refuse to collude towards transaction exclusion.
>
> It's good that you see the importance of this. You should also take into consideration the number of independent mining entities it takes to achieve 51% hashrate. It will be of little use to have thousands of independent miners/pools if 3 large pools make up 51% of hash rate and collude to attack the network.
Luv Khemani via bitcoin-dev
2017-03-31 08:19:07 UTC
Permalink
Raw Message
> Err, no, that's what happens when you double click the Ethereum icon
> instead of the Bitcoin icon. Just because you run "Bitcoin SPV"
> instead of "Bitcoin Verify Everyone Else's Crap" doesn't mean you're
> somehow going to get Ethereum payments. Your verification is just
> different and the risks that come along with that are different. It's
> only confusing if you make it confusing.

This is false. You could get coins which don't even exist as long as a miner mined the invalid transaction.
Peter Todd has demonstrated this on mainstream SPV wallets,
https://www.linkedin.com/pulse/peter-todds-fraud-proofs-talk-mit-bitcoin-expo-2016-mark-morris

The only reason SPV wallets do not accept Ethereum payments is the difference in transaction/block formats.
SPV wallets have no clue what is a valid bitcoin, they trust miners fully.

In the event of a hardfork, SPV wallets will blindly follow the longest chain.
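A minimal sketch of what an SPV check actually proves may help here: the wallet folds a Merkle branch up to the root committed in a block header, which shows only that a miner included the transaction. Nothing in the check validates signatures, amounts, or that the inputs exist. (The "txids" below are arbitrary placeholder bytes, not real transactions.)

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root_from_branch(txid: bytes, branch: list, index: int) -> bytes:
    """Fold a Merkle branch up to a root, the way an SPV client checks
    that a block committed to a transaction. Note what is NOT checked:
    signatures, amounts, or whether the inputs exist at all."""
    node = txid
    for sibling in branch:
        if index % 2 == 0:
            node = dsha256(node + sibling)   # our node is the left child
        else:
            node = dsha256(sibling + node)   # our node is the right child
        index //= 2
    return node

# Two placeholder "txids" forming a two-leaf tree:
tx_a = dsha256(b"tx-a")
tx_b = dsha256(b"tx-b")
root = dsha256(tx_a + tx_b)

# The proof for tx_a is just its sibling hash; it verifies regardless
# of whether tx_a spends coins that actually exist.
assert merkle_root_from_branch(tx_a, [tx_b], 0) == root
```

The proof passes for any bytes a miner chose to include, which is exactly the gap Peter Todd's demonstration exploited.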

> If every block that is mined for them is deliberately empty because of
> an attacker, that's nonfunctional. You can use whatever semantics you
> want to describe that situation, but that's clearly what I meant.

Not sure why you are bringing this up; this is not the case today, nor does it have anything to do with blocksize.

> As above, if someone operates Bitcoin in SPV mode they are not
> magically at risk of getting Dashcoins. They send and receive
> Bitcoins just like everyone else running Bitcoin software. There's no
> confusion about it and it doesn't have anything to do with hashrates
> of anyone.

As mentioned earlier, you are at risk of receiving made-up money.
SPV has everything to do with hashrate; it trusts hashrate fully.
Crafting a Bitcoin transaction paying you money that I do not have is not difficult; as long as a miner mines a block with it, your SPV wallet will accept it.

> The debate is a choice between nodes paying more to allow greater growth and adoption,
> or nodes constraining adoption in favor of debatable security
> concerns.

Onchain transactions are not the only way to use Bitcoin the currency.
Trades you do on an exchange are not onchain, yet transacted with Bitcoin.

> And even if there was, the software would choose it for you?

People choose the software, not the other way round.

> Yes you do, if the segment options are known (and if they aren't,
> running a node likely won't help you choose either; it will choose by
> accident and you'll have no idea). You would get to choose whose
> verifications to request/check, and thus choose which segment to
> follow, if any.

SPV wallets do not decide; they follow the longest chain.
Centralised/Server based wallets follow the server they are connecting to.
Full Nodes do not depend on a 3rd party to decide if the money received is valid.

> Are you really this dense? If the cost of on-chain transactions
> rises, numerous use cases get killed off. At $0.10 per tx you
> probably won't buy in-game digital microtransactions with it, but you
> might buy coffee with it. At $1 per tx, you probably won't buy coffee
> with it but you might pay your ISP bill with it. At $20 per tx, you
> probably won't pay your ISP bill with it, but you might pay your rent.
> At $300 per tx you probably won't use it for anything, but a company
> purchasing goods from China might. At $4000 per tx that company
> probably won't use it, but international funds settlement for
> million-dollar transactions might use it.
> At each fee step along the way you kill off hundreds or thousands of
> possible uses of Bitcoin. Killing those off means fewer people will
> use it, so they will use something else instead.

No need to get personal.
As mentioned earlier, all these low-value transactions can happen offchain.
None of the use cases will be killed off. We have sub-dollar trades happening on exchanges offchain.

> The average person doesn't need that level of security.

Precisely why they do not need to be on-chain.

It is clear to me that you have not yet grasped Bitcoin's security model, especially the role full nodes play in it.
I'd suggest you do some more reading up and thinking about it.
Do thought experiments and take it to the extreme where nobody runs a node: what can miners do now that they could not do before?
Why don't exchanges run SPV nodes?

Further correspondence will not be fruitful until you grasp this.



David Vorick via bitcoin-dev
2017-03-31 16:14:42 UTC
Permalink
Raw Message
> No one is suggesting anything like this. The cost of running a node
> that could handle 300% of the 2015 worldwide nonbitcoin transaction
> volume today would be a rounding error for most exchanges even if
> prices didn't rise.


Then explain why PayPal has multiple datacenters. And why Visa has multiple
datacenters. And why the banking systems have multiple datacenters each.

I'm guessing it's because you need that much juice to run a global payment
system at the transaction volumes that they run at.

Unless you have professional experience working directly with transaction
processors handling tens of millions of financial transactions per day, I
think we can fully discount your assessment that it would be a rounding
error in the budget of a major exchange or Bitcoin processor to handle that
much load. And even if it was, it wouldn't matter because it's extremely
important to Bitcoin's security that its everyday users are able to and
are actively running full nodes.

I'm not going to take the time to refute everything you've been saying but
I will say that most of your comments have demonstrated a similar level of
ignorance as the one above.

This whole thread has been absurdly low quality.
Jared Lee Richardson via bitcoin-dev
2017-03-31 16:46:10 UTC
I guess I should caveat that: "a rounding error" is a bit of an
exaggeration - mostly because I previously assumed it would take 14 years
for the network to reach such a level, something I didn't say and that
you might not grant me.

I don't know why paypal has multiple datacenters, but I'm guessing it
probably has a lot more to do with everything else they do -
interface, support, tax compliance, replication, redundancy - than it
does with the raw numbers of transaction volumes.

What I do know is the math, though. Worldwide tx volume was
426,000,000,000 in 2015. Assuming a tx size of ~500 bytes, that's 669
terabytes of data per year. At a hard drive cost of $0.021 per GB, that's
$36k a year or so, and it declines ~14% a year.

The bandwidth is the really big cost. You are right that if this
hypothetical node also had to support historical syncing, the numbers
would probably be unmanageable. But that can be solved with a simple
checkpointing system for the vast majority of users, and nodes could
solve it by not supporting syncing / reducing peer count. With a peer
count of 25 I measured ~75 GB/month with today's blocksize cap. That
works out to roughly 10 relays (sends + receives) per transaction
assuming all blocks were full, which was a pretty close approximation.
The bandwidth for our 426 billion transactions per year works out
to 942 Mbit/s. That's 310 terabytes per month of bandwidth - at
today's high-volume price of $0.05 per GB, that's $18,500 a month or
$222,000 a year. Plus the $36k per year for storage, that brings it to
~$250k per year. Not a rounding error, but within the rough costs of
running an exchange - a team of 5 developers works out to ~$400-600k a
year, and the cost of compliance with EU and U.S. entities (including
lawyers) runs upwards of a million dollars a year. Then there's the
support department, probably ~$100-200k a year.
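The bandwidth arithmetic above can be sketched as follows (a minimal
back-of-envelope, assuming the thread's figures of ~500 bytes per
transaction and ~10 relays per transaction; the result scales linearly in
both, so a larger effective per-transaction size gives proportionally
larger totals):

```python
def monthly_bandwidth_tb(tx_per_year, tx_bytes, relay_factor):
    """Back-of-envelope relay bandwidth for a busy full node, in TB/month."""
    bytes_per_year = tx_per_year * tx_bytes * relay_factor
    return bytes_per_year / 12 / 1e12

def monthly_cost_usd(tb_per_month, usd_per_gb):
    """Bandwidth bill at a flat per-GB price."""
    return tb_per_month * 1000 * usd_per_gb

# 426 billion tx/year, ~500 bytes each, ~10 relays (sends + receives) per tx
tb = monthly_bandwidth_tb(426e9, 500, 10)
print(round(tb, 1), "TB/month")                # 177.5 TB/month
print("$", round(monthly_cost_usd(tb, 0.05)))  # ~$8,900/month at $0.05/GB
```

With these particular inputs the total comes out below the email's 310
TB/month figure; matching that number would require a larger effective
byte count per transaction, so treat the inputs as tunable assumptions.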

The reason I said a rounding error was that I assumed it would take until
2032 to reach that volume of transactions (assuming +80%/year growth,
which is our 4-year and 2-year historical average tx/s growth). If hard
drive prices decline by 14% per year, that cost becomes $3,900 a year, and
if bandwidth prices decline by 14% a year, that cost becomes $1,800 a
month ($21,600 a year). Against a multi-million-dollar budget, even 3x
that isn't a large concern, though not, as I stated, a rounding error. My
bad.
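The compounding behind those projections is simple enough to check (a
minimal sketch assuming a constant 14%/year price decline over 15 years;
the exact year count assumed is the moving part, which is why the results
land near, rather than exactly on, the figures quoted above):

```python
def cost_after_decline(initial_usd, annual_decline, years):
    """Project an annual cost forward under a constant price decline."""
    return initial_usd * (1 - annual_decline) ** years

# $36k/year storage and $222k/year bandwidth, 15 years of 14%/year declines
print(round(cost_after_decline(36_000, 0.14, 15)))   # ~3750
print(round(cost_after_decline(222_000, 0.14, 15)))  # ~23100
```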

I didn't approximate CPU usage, as I don't have any good estimates for it,
and I don't have significant reason to believe it is a higher cost than
bandwidth, which seems to be the controlling cost compared to adding CPUs.

> I'm not going to take the time to refute everything you've been saying

Care to respond to the math?

> This whole thread has been absurdly low quality.

Well, we agree on something at least.

David Vorick via bitcoin-dev
2017-03-31 18:23:00 UTC
Sure, your math is pretty much entirely irrelevant because scaling systems
to massive sizes doesn't work that way.

At 400B transactions per year we're looking at block sizes of 4.5 GB, and a
database size of petabytes. How much RAM do you need to process blocks like
that? Can you fit that much RAM into a single machine? Okay, you can't fit
that much RAM into a single machine. So you have to rework the code to
operate on a computer cluster.

Already we've hit a significant problem. You aren't going to rewrite
Bitcoin to do block validation on a computer cluster overnight. Further,
are storage costs consistent when we're talking about setting up clusters?
Are bandwidth costs consistent when we're talking about setting up
clusters? Are RAM and CPU costs consistent when we're talking about setting
up clusters? No, they aren't. Clusters are a lot more expensive to set up
per-resource because they need to talk to each other and synchronize with
each other and you have a LOT more parts, so you have to build in
redundancies that aren't necessary in non-clusters.

Also worth pointing out that peak transaction volumes are typically 20-50x
the size of typical transaction volumes. So your cluster isn't going to
need to plan to handle 15k transactions per second, you're really looking
at more like 200k or even 500k transactions per second to handle
peak-volumes. And if it can't, you're still going to see full blocks.

You'd need a handful of experts just to maintain such a thing. Disks are
going to be failing every day when you are storing multiple PB, so you
can't just count a flat cost of $20/TB and expect that to work. You're
going to need redundancy and tolerance so that you don't lose the system
when a few of your hard drives all fail within minutes of each other. And
you need a way to rebuild everything without taking the system offline.

This isn't even my area of expertise. I'm sure there are a dozen other
significant issues that one of the Visa architects could tell you about
when dealing with mission-critical data at this scale.

--------

Massive systems operate very differently and are much more costly per-unit
than tiny systems. Once we grow the blocksize large enough that a single
computer can't do all the processing by itself, we get into a world of
much harder, much more expensive scaling problems. Especially because we're
talking about a distributed system where the nodes don't even trust each
other. And transaction processing is largely non-parallel. You have to
check each transaction against each other transaction to make sure that
they aren't double spending each other. This takes synchronization and
prevents 500 CPUs from all crunching the data concurrently. You have to be
a lot more clever than that to get things working and consistent.

When talking about scalability problems, you should ask yourself what other
systems in the world operate at the scales you are talking about. None of
them have cost structures in the 6 digit range, and I'd bet (without
actually knowing) that none of them have cost structures in the 7 digit
range either. In fact I know from working in a related industry that the
cost structures for the datacenters (plus the support engineers, plus the
software management, etc.) that do airline ticket processing are above $5
million per year for the larger airlines. Visa is probably even more
expensive than that (though I can only speculate).
Eric Voskuil via bitcoin-dev
2017-03-31 18:58:44 UTC
As an independently verifiable, decentralized store of public information, the Bitcoin block tree and transaction DAG do have an advantage over systems such as Visa. The store is just a cache. There is no need to implement reliability in storage or in communications. It is sufficient to be able to detect invalidity. And even if a subset of nodes fail to do so, the system overall compensates.

As such the architecture of a Bitcoin node and its supporting hardware requirements are very different from an unverifiable, centralized store of private information. So in that sense the comparison below is not entirely fair. Many, if not most, of the high costs of a Visa datacenter do not apply because of Bitcoin's information architecture.

However, if the system cannot remain decentralized these architectural advantages will not hold. At that point your considerations below are entirely valid. Once the information is centralized it necessarily becomes private and fragile. Conversely, once it becomes private it necessarily becomes centralized and fragile. This fragility requires significant investment by the central authority to maintain.

So as has been said, we can have decentralization and its benefit of trustlessness or we can have Visa. We already have Visa. Making another is entirely uninteresting.

e

Jared Lee Richardson via bitcoin-dev
2017-04-01 06:15:09 UTC
> So your cluster isn't going to need to plan to handle 15k transactions per second, you're really looking at more like 200k or even 500k transactions per second to handle peak-volumes. And if it can't, you're still going to see full blocks.

When I first entered the blocksize-debate slime-trap that we have all
found ourselves in, I had the same line of reasoning that you have now.
Blockchains are clearly an incredibly inefficient and poorly designed
system for massive scales of transactions, as I'm sure you would agree.
Therefore, I felt it was important for people to accept this reality now
and stop trying to use blockchains for things they weren't good for, as
much for their own good as anyone else's. I backed this by calculating
some miner fee requirements, as well as the very issue you raised. A few
people argued with me rationally, and gradually I was forced to look at a
different question: granted that we cannot fit all desired transactions
on a blockchain, how many CAN we effectively fit?

It took another month before I actually changed my mind. What changed it
was when I tried to make estimations, assuming all the reasonable trends
I could find held, about future transaction fees and future node costs.
Did they need to go up exponentially? How fast? What would we be dealing
with in the future? After seeing the huge divergence in node operational
costs without size increases ($3 vs $3000 after some number of years
stands out in my memory), I tried adjusting various things, until I
started comparing the costs in BTC terms. I eventually realized that
comparing node operational costs in BTC per unit time against transaction
costs in dollars revealed that node operational costs per unit time could
decrease without causing transaction fees to rise. The transaction fees
still had to hit $1 or $2, sometimes $4, to remain a viable protection,
but otherwise they could stabilize around those points while node
operational costs per unit time still decreased.

None of that may mean anything to you, so you may ignore it all if you
like, but my point is that I once used similar logic; any disagreements
we may have do not mean I magically think as you implied above. Some
people think blockchains should fit any transaction of any size, and I'm
sure you and I would both agree that's ridiculous. Blocks will nearly
always be full in the future. There is no need to attempt to handle
unusual volume increases - the fee markets will balance it, and the
use-cases that can barely afford to fit on-chain will simply have to wait
for a while. The question is not "can we handle all traffic," it is "how
many use-cases can we enable without sacrificing our most essential
features?" (And for that matter, what is each essential feature, and what
is it worth?)

There are many distinct cut-off points that we could consider. On the
extreme end, Raspberry Pis and toasters are out. Data-bound mobile
phones are out for at least the next few years, if ever. Currently the
concern is around home users' bandwidth limits. The next limit after
that may be the CPU, memory, or bandwidth of a single top-end PC. The
limit after that may be the highest data speeds that large, remote
Bitcoin mining facilities can afford, but after fees rise and a few
years pass, they may remove that limit for us. Then the next limit might
be the maximum amount of memory available in a single datacenter server.

At each limit we consider, we have a choice of killing off a number of
on-chain usecases versus the cost of losing the nodes who can't reach
the next limit effectively. I have my inclinations about where the
limits would be best set, but the reality is I don't know the numbers
on the vulnerability and security risks associated with various node
distributions. I'd really like to, because if I did I could begin
evaluating the costs on each side.

> How much RAM do you need to process blocks like that?

That's a good question, and one I don't have a good handle on. How does
Bitcoin's current memory usage scale? It can't be based on the UTXO set,
which is 1.7 GB while my node is only using ~450 MB of RAM. How does RAM
consumption increase with a large block versus small ones? Are there
trade-offs that can be made to write to disk if RAM usage grew too large?

If that proved to be a prohibitively large growth number, it becomes a
worthwhile number to consider for scaling. Of note, you can currently buy
EC2 instances with 256 GB of RAM easily, and in 14 years that will be
even higher.

> So you have to rework the code to operate on a computer cluster.

I believe this is exactly the kind of discussion we should be having 14
years before it might be needed. Also, this wouldn't be unique - some
software I have used in the past (the graphite metric-collection system)
came pre-packaged with the ability to scale out to multiple machines to
split load and replicate data, and so could future node software.

> Further, are storage costs consistent when we're talking about setting up clusters? Are bandwidth costs consistent when we're talking about setting up clusters? Are RAM and CPU costs consistent when we're talking about setting up clusters? No, they aren't.

Bandwidth costs are, as intra-datacenter bandwidth is generally free.
The other ones warrant evaluation for the distant future. I would expect
that CPU resources are the first thing we would have to change - 13
thousand transactions per second is an awful lot to process. I'm not
intimately familiar with the processing - isn't it largely signature
verification of the transaction itself, plus a minority of time spent
checking and updating UTXO values, and finally a small number of hashes
to check block validity? If signature verification were the controlling
cost, a specialized ASIC chip (on a plug-in card) might be able to verify
signatures hundreds of times faster, and it could even be on a cheap
130nm process like the first ASIC miners rushed to market. Point being,
there are options, and it may warrant looking into after evaluating the
risks of node-count reductions.

> You'd need a handful of experts just to maintain such a thing.

I don't think this is as big a deal as it first might seem. The software
would already come written to be spanned across multiple machines - it
just needs to be configured. For the specific question at hand, the
exchange would already have IT staff and datacenter capacity/operations
for their other business. In the more general case, the numbers involved
don't work out to extreme concerns at that level. The highest CPU usage
I've observed on my nodes is less than 5%, and less than 1% for the
period I just checked, handling ~3 tx/s. So being conservative, if it
hits 100% on one core at 60-120 tx/s, that works out to ~25-50 8-core
machines. But again, that's a 2-year-old laptop CPU, and we're talking
about 14 years into the future. Even if it were 25 machines, that's the
kind of operation a one- or two-man IT team just runs on the side with
their extra duties. It isn't enough to hire a full-time tech for.
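That sizing estimate can be sanity-checked in a couple of lines (the 15k
tx/s peak target, the 60-120 tx/s per core, and the 8-core machines are
this thread's assumptions, and verification is assumed to parallelize
cleanly across cores, which real implementations only approximate):

```python
import math

def machines_needed(target_tps, tps_per_core, cores_per_machine):
    """Machines required to validate a target transaction throughput."""
    return math.ceil(target_tps / (tps_per_core * cores_per_machine))

# 15k tx/s peak, pessimistic and optimistic per-core throughput
print(machines_needed(15_000, 60, 8))   # 32 machines
print(machines_needed(15_000, 120, 8))  # 16 machines
```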

> Disks are going to be failing every day when you are storing multiple PB, so you can't just count a flat cost of $20/TB and expect that to work.

I mean, that's literally what Amazon does for you with S3, which was even
cheaper than the EBS datastore pricing I was looking at. Even
disregarding that, RAID operation was a solved thing more than 10 years
ago, and hard drives 14 years out would hold roughly ~110 TB for a $240
drive at a 14%/year growth rate. In 2034 the blockchain would fit on 10
of those. Not exactly a "failing every day" kind of problem. By 2040,
you'd need *gasp* 22 $240 hard drives. I mean, it is a lot, but not a lot
like you're implying.

> And you need a way to rebuild everything without taking the system offline.

That depends heavily upon the tradeoffs the businesses can make. I don't
think node operation at an exchange is a five-nines-uptime operation;
they could probably tolerate three nines. The worst that happens is that
people's withdrawals and deposits are occasionally delayed slightly. It
won't shut down trading.

> I'm sure there are a dozen other significant issues that one of the Visa architects could tell you about when dealing with mission-critical data at this scale.

Visa stores the only copy. They can't afford to lose the data.
Bitcoin isn't like that, as others pointed out. And for most
businesses, if their node must be rebooted periodically, it isn't a
huge deal.

> Once we grow the blocksize large enough that a single computer can't do all the processing all by itself we get into a world of much harder, much more expensive scaling problems.

Ok, when is that point, and what is the tradeoff in terms of nodes?
Just because something is hard doesn't mean it isn't worth doing.
That's just a defeatist attitude. How big can we get, for what
tradeoffs, and what do we need to do to get there?

> You have to check each transaction against each other transaction to make sure that they aren't double spending eachother.

This is really not that hard. Have a central database, and update/check
the UTXO values in block-sized increments. If a UTXO has already been
spent in this increment, the block is invalid. If the database somehow
got too big (not going to happen at these scales, but if it did), it
could be sharded trivially on the transaction data. These are solved
problems; the free database software that's available is pretty powerful.
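As a toy illustration of that bookkeeping (an in-memory sketch with an
invented transaction format, not how any real node structures its UTXO
database):

```python
def apply_block(utxos, block):
    """Validate a block against a UTXO set: every input must exist, and no
    output may be spent twice (within or across blocks). Returns the
    updated UTXO set, or None if the block is invalid."""
    new_utxos = dict(utxos)
    for txid, inputs, outputs in block:
        for outpoint in inputs:
            if outpoint not in new_utxos:
                return None  # missing or already-spent input
            del new_utxos[outpoint]
        for n, value in enumerate(outputs):
            new_utxos[(txid, n)] = value
    return new_utxos

utxos = {("coinbase", 0): 50}
block = [("tx1", [("coinbase", 0)], [30, 20])]
print(apply_block(utxos, block))  # {('tx1', 0): 30, ('tx1', 1): 20}

double_spend = [("tx1", [("coinbase", 0)], [30]),
                ("tx2", [("coinbase", 0)], [20])]
print(apply_block(utxos, double_spend))  # None
```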

> You have to be a lot more clever than that to get things working and consistent.

NO, NOT CLEVER. WE CAN'T DO THAT.

Sorry, I had to. :)

> None of them have cost structures in the 6 digit range, and I'd bet (without actually knowing) that none of them have cost structures in the 7 digit range either.

I know of and have experience working with systems that handled
several orders of magnitude more data than this. None of the issues
brought up above are problems that someone hasn't solved. Transaction
commitments to databases? Data consistency across multiple workers?
Data storage measured in exabytes? Data storage and updates
approaching hundreds of millions of datapoints per second? These
things are done every single day at numerous companies.

Peter R via bitcoin-dev
2017-03-29 20:28:29 UTC
I believe nearly everyone at Bitcoin Unlimited would be supportive of a UTXO check-pointing scheme. I’d love to see this happen, as it would greatly reduce the time needed to get a new node up-and-running, for node operators who are comfortable trusting these commitments.

I’m confident that we could work with the miners who we have good relationships with to start including the root hash of the (lagging) UTXO set in their coinbase transactions, in order to begin transforming this idea into reality. We could also issue regular transactions from “semi-trusted” addresses controlled by known people that include the same root hash in an OP_RETURN output, which would allow cross-checking against the miners’ UTXO commitments, as part of this initial “prototype” system.

This would "get the ball rolling" on UTXO commitments in a permissionless way (no one can stop us from doing this). If the results from this prototype commitment scheme were positive, then perhaps there would be support from the community and miners to enforce a new rule which requires the (lagging) root hashes be included in new blocks. At that point, the UTXO commitment scheme is no longer a prototype but a trusted feature of the Bitcoin network.

On that topic, are there any existing proposals detailing a canonical ordering of the UTXO set and a scheme to calculate the root hash?
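For illustration only, one naive answer to that question (my sketch, not
an existing proposal: sort UTXOs canonically by outpoint and Merkle-hash
the serialized entries; a production scheme would also need efficient
incremental updates as blocks arrive, which this ignores):

```python
import hashlib

def sha256d(data):
    """Double SHA-256, as used elsewhere in Bitcoin."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def utxo_root(utxos):
    """Merkle root over a canonically ordered UTXO set.
    utxos: dict mapping (txid_hex, vout) -> value in satoshis."""
    leaves = [sha256d(f"{txid}:{vout}:{value}".encode())
              for (txid, vout), value in sorted(utxos.items())]
    if not leaves:
        return sha256d(b"")
    while len(leaves) > 1:
        if len(leaves) % 2:            # duplicate the last leaf on odd levels
            leaves.append(leaves[-1])
        leaves = [sha256d(leaves[i] + leaves[i + 1])
                  for i in range(0, len(leaves), 2)]
    return leaves[0]

a = {("aa", 0): 50, ("bb", 1): 25}
b = {("bb", 1): 25, ("aa", 0): 50}    # same set, different insertion order
assert utxo_root(a) == utxo_root(b)   # canonical ordering fixes the root
```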

Best regards,
Peter


> On Mar 29, 2017, at 12:33 PM, Daniele Pinna via bitcoin-dev <bitcoin-***@lists.linuxfoundation.org> wrote:
>
> What about periodically committing the entire UTXO set to a special checkpoint block which becomes the new de facto Genesis block?
>
> Daniele
>
> On Wed, 29 Mar 2017 16:41:29 +0000, Andrew Johnson <***@gmail.com> wrote:
>
> I believe that as we continue to add users to the system by scaling
> capacity that we will see more new nodes appear, but I'm at a bit of a loss
> as to how to empirically prove it.
>
> I do see your point on increasing load on archival nodes, but the majority
> of that load is going to come from new nodes coming online, they're the
> only ones going after very old blocks. I could see that as a potential
> attack vector, overwhelm the archival nodes by spinning up new nodes
> constantly, therefore making it difficult for a "real" new node to get up
> to speed in a reasonable amount of time.
>
> Perhaps the answer there would be a way to pay an archival node a small
> amount of bitcoin in order to retrieve blocks older than a certain cutoff?
> Include an IP address for the node asking for the data as metadata in the
> transaction... Archival nodes could set and publish their own policy, let
> the market decide what those older blocks are worth. Would also help to
> incentivize running archival node, which we do need. Of course, this isn't
> very user friendly.
>
> We can take this to bitcoin-discuss, if we're getting too far off topic.
>
>
> On Wed, Mar 29, 2017 at 11:25 AM David Vorick <***@gmail.com <mailto:***@gmail.com>>
> wrote:
>
> >
> > On Mar 29, 2017 12:20 PM, "Andrew Johnson" <***@gmail.com <mailto:***@gmail.com>>
> > wrote:
> >
> > What's stopping these users from running a pruned node? Not every node
> > needs to store a complete copy of the blockchain.
> >
> >
> > Pruned nodes are not the default configuration, if it was the default
> > configuration then I think you would see far more users running a pruned
> > node.
> >
> > But that would also substantially increase the burden on archive nodes.
> >
> >
> > Further discussion about disk space requirements should be taken to
> > another thread.
> >
> >
> > --
> Andrew Johnson
Jared Lee Richardson via bitcoin-dev
2017-03-29 22:17:40 UTC
> I’m confident that we could work with the miners who we have good
> relationships with to start including the root hash of the (lagging) UTXO
> set in their coinbase transactions, in order to begin transforming this
> idea into reality.

By itself, this wouldn't work without a way for a new node to differentiate
between a false history and a true one.

> We could also issue regular transactions from “semi-trusted” addresses
controlled by known people that include the same root hash in an OP_RETURN
output, which would allow cross-checking against the miners’ UTXO
commitments, as part of this initial “prototype”

This might work, but I fail to understand how a new node could verify an
address / transaction without a blockchain to back it. Even if it could,
it becomes dependent upon those addresses not being compromised, and the
owners of those addresses would become targets for potential government
operations.
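The cross-checking being debated here could be sketched roughly as follows. Everything in this sketch is an illustrative assumption, not part of any concrete proposal: the function name, the two-of-three agreement threshold, and the idea of comparing hex-encoded root hashes directly.

```python
# Sketch: accept a UTXO-set root hash only when independent sources agree.
# All names and the threshold are hypothetical; this is illustrative only.

def accept_utxo_root(coinbase_root, op_return_roots, threshold=2):
    """coinbase_root: the root hash a miner committed in a coinbase tx.
    op_return_roots: root hashes seen in OP_RETURN outputs published by
    semi-trusted addresses. Accept only if enough of them match."""
    matches = sum(1 for r in op_return_roots if r == coinbase_root)
    return matches >= threshold

# Example: two of three semi-trusted signers agree with the miner.
roots = ["ab" * 32, "ab" * 32, "cd" * 32]
assert accept_utxo_root("ab" * 32, roots, threshold=2)
assert not accept_utxo_root("cd" * 32, roots, threshold=2)
```

The weakness discussed above remains: the scheme is only as strong as the independence and integrity of the semi-trusted signers.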

Having the software silently attempt to resolve the problem is risky unless
it is foolproof. Otherwise, users will implicitly assume their software is
showing them the correct history and balances, and if the change the
attacker made to the UTXO set was small, users might follow the main chain
without incident until it was too late and the attacker struck with an
address that had otherwise never transacted. The result would be a sudden,
bizarre, hard-to-debug fork, and potentially a double spend against anyone
who had accepted the fraudulent UTXO.

Users already treat wallet software with some level of suspicion, asking
whether they can trust x or y or z, like the portion of the BU community
convinced that Core has been compromised by Blockstream bigwigs. Signed
releases could provide the same assurance while encouraging both
open-source security review of the signed UTXO sets and verification of
download signatures by users.

Either approach is better than what we have now though, so I'd support
anything.
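On Peter's question (quoted below) about a canonical ordering of the UTXO set and a scheme to calculate the root hash: a minimal sketch might look like the following. The choices here are assumptions for illustration only, not a settled specification: lexicographic ordering by (txid, vout), a fixed per-entry serialization, and Bitcoin-style double-SHA256 Merkle combination with odd-leaf duplication.

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def utxo_root(utxos):
    """utxos: iterable of (txid_hex, vout, amount_sats, script_hex).
    Canonical order: sort by (txid, vout); hash each serialized entry;
    then combine pairwise into a Merkle root."""
    leaves = [sha256d(bytes.fromhex(txid)
                      + vout.to_bytes(4, "little")
                      + amount.to_bytes(8, "little")
                      + bytes.fromhex(script))
              for txid, vout, amount, script in sorted(utxos)]
    if not leaves:
        return sha256d(b"")
    while len(leaves) > 1:
        if len(leaves) % 2:            # duplicate the last leaf on odd
            leaves.append(leaves[-1])  # levels, as Bitcoin's tx tree does
        leaves = [sha256d(leaves[i] + leaves[i + 1])
                  for i in range(0, len(leaves), 2)]
    return leaves[0]

# The root is deterministic regardless of the order UTXOs arrive in.
utxos = [("aa" * 32, 0, 5000, "51"), ("bb" * 32, 1, 100, "52")]
assert utxo_root(utxos) == utxo_root(list(reversed(utxos)))
```

Any real commitment scheme would also need to pin down the exact serialization and handle incremental updates efficiently, which this sketch ignores.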

On Wed, Mar 29, 2017 at 1:28 PM, Peter R via bitcoin-dev <
bitcoin-***@lists.linuxfoundation.org> wrote:

> I believe nearly everyone at Bitcoin Unlimited would be supportive of a
> UTXO check-pointing scheme. I’d love to see this happen, as it would
> greatly reduce the time needed to get a new node up-and-running, for node
> operators who are comfortable trusting these commitments.
>
> I’m confident that we could work with the miners who we have good
> relationships with to start including the root hash of the (lagging) UTXO
> set in their coinbase transactions, in order to begin transforming this
> idea into reality. We could also issue regular transactions from
> “semi-trusted” addresses controlled by known people that include the same
> root hash in an OP_RETURN output, which would allow cross-checking against
> the miners’ UTXO commitments, as part of this initial “prototype” system.
>
> This would "get the ball rolling" on UTXO commitments in a permissionless
> way (no one can stop us from doing this). If the results from this
> prototype commitment scheme were positive, then perhaps there would be
> support from the community and miners to enforce a new rule which requires
> the (lagging) root hashes be included in new blocks. At that point, the
> UTXO commitment scheme is no longer a prototype but a trusted feature of
> the Bitcoin network.
>
> On that topic, are there any existing proposals detailing a canonical
> ordering of the UTXO set and a scheme to calculate the root hash?
>
> Best regards,
> Peter
>
>
> On Mar 29, 2017, at 12:33 PM, Daniele Pinna via bitcoin-dev <
> bitcoin-***@lists.linuxfoundation.org> wrote:
>
> What about periodically committing the entire UTXO set to a special
> checkpoint block which becomes the new de facto Genesis block?
>
> Daniele
>
> ------------------------------
>
> Message: 5
> Date: Wed, 29 Mar 2017 16:41:29 +0000
> From: Andrew Johnson <***@gmail.com>
> To: David Vorick <***@gmail.com>
> Cc: Bitcoin Dev <bitcoin-***@lists.linuxfoundation.org>
> Subject: Re: [bitcoin-dev] Hard fork proposal from last week's meeting
> Message-ID:
> <CAAy62_+JtoAuM-RsrAAp5eiGiO+***@mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> I believe that as we continue to add users to the system by scaling
> capacity we will see more new nodes appear, but I'm at a bit of a loss
> as to how to empirically prove it.
>
> I do see your point on increasing load on archival nodes, but the majority
> of that load is going to come from new nodes coming online, they're the
> only ones going after very old blocks. I could see that as a potential
> attack vector, overwhelm the archival nodes by spinning up new nodes
> constantly, therefore making it difficult for a "real" new node to get up
> to speed in a reasonable amount of time.
>
> Perhaps the answer there would be a way to pay an archival node a small
> amount of bitcoin in order to retrieve blocks older than a certain cutoff?
> Include an IP address for the node asking for the data as metadata in the
> transaction... Archival nodes could set and publish their own policy, let
> the market decide what those older blocks are worth. Would also help to
> incentivize running archival node, which we do need. Of course, this isn't
> very user friendly.
>
> We can take this to bitcoin-discuss, if we're getting too far off topic.
>
>
> On Wed, Mar 29, 2017 at 11:25 AM David Vorick <***@gmail.com>
> wrote:
>
> >
> > On Mar 29, 2017 12:20 PM, "Andrew Johnson" <***@gmail.com>
> > wrote:
> >
> > What's stopping these users from running a pruned node? Not every node
> > needs to store a complete copy of the blockchain.
> >
> >
> > Pruned nodes are not the default configuration, if it was the default
> > configuration then I think you would see far more users running a pruned
> > node.
> >
> > But that would also substantially increase the burden on archive nodes.
> >
> >
> > Further discussion about disk space requirements should be taken to
> > another thread.
> >
> >
> > --
> Andrew Johnson
>
> ------------------------------
> _______________________________________________
> bitcoin-dev mailing list
> bitcoin-***@lists.linuxfoundation.org
> https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev
>
>
>
>
>
Rodney Morris via bitcoin-dev
2017-03-31 21:23:01 UTC
You guessed wrong. Multiple data centres are as much about redundancy,
resiliency, and latency as they are about raw processing capacity.

As for the cost: data centre space, business-grade communication lines, and
staff are orders of magnitude more expensive than the physical hardware
they support.

I'd like to call you out on your continuing reductio ad absurdum and
slippery-slope arguments. Just because we can't handle 4GB blocks today
doesn't mean we shouldn't aim in that direction, and it doesn't mean we
shouldn't be taking our first, second, and third baby steps in that
direction.

If the obsession with every personal computer being able to run a full
node continues, then bitcoin will be consigned to the dustbin of history,
a footnote to the story of the global cryptocurrency that eventually took
over the world.

Thanks
Rodney


Date: Fri, 31 Mar 2017 12:14:42 -0400
From: David Vorick <***@gmail.com>
To: Jared Lee Richardson <***@gmail.com>
Cc: Bitcoin Dev <bitcoin-***@lists.linuxfoundation.org>
Subject: Re: [bitcoin-dev] Hard fork proposal from last week's meeting
Message-ID:
<***@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"


Then explain why PayPal has multiple datacenters. And why Visa has multiple
datacenters. And why the banking systems have multiple datacenters each.

I'm guessing it's because you need that much juice to run a global payment
system at the transaction volumes that they run at.



Unless you have professional experience working directly with transaction
processors handling tens of millions of financial transactions per day, I
think we can fully discount your assessment that it would be a rounding
error in the budget of a major exchange or Bitcoin processor to handle that
much load. And even if it were, it wouldn't matter, because it's extremely
important to Bitcoin's security that its everyday users are able to run,
and actively do run, full nodes.

I'm not going to take the time to refute everything you've been saying but
I will say that most of your comments have demonstrated a similar level of
ignorance as the one above.

This whole thread has been absurdly low quality.
Eric Voskuil via bitcoin-dev
2017-03-31 23:13:09 UTC
On 03/31/2017 02:23 PM, Rodney Morris via bitcoin-dev wrote:
> If the obsession with every personal computer being able to run a
> full node continues then bitcoin will be consigned to the dustbin
> of history,

The cause of the block size debate is the failure to understand the
Bitcoin security model. This failure is perfectly exemplified by the
above statement. If a typical personal computer cannot run a node
there is no security.

e
Rodney Morris via bitcoin-dev
2017-04-01 01:41:58 UTC
I didn't say typical, I said every. Currently a Raspberry Pi on shitty ADSL
can run a full node. What's wrong with needing a high-end PC and good
connectivity to run a full node?

People that want to, can. People that don't want to, won't, no matter how
low spec the machine you need.

If nobody uses bitcoin, all the security in the world provides no value.
The value of bitcoin is provided by people using bitcoin, and people will
only use bitcoin if it provides value to them. Security is one aspect
only. And the failure to understand that is what has led to the block size
debate.

Rodney

On 1 Apr 2017 10:12, "Eric Voskuil" <***@voskuil.org> wrote:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

On 03/31/2017 02:23 PM, Rodney Morris via bitcoin-dev wrote:
> If the obsession with every personal computer being able to run a
> full node continues then bitcoin will be consigned to the dustbin
> of history,

The cause of the block size debate is the failure to understand the
Bitcoin security model. This failure is perfectly exemplified by the
above statement. If a typical personal computer cannot run a node
there is no security.

e
Natanael via bitcoin-dev
2017-04-01 13:26:35 UTC
On 1 Apr 2017 01:13, "Eric Voskuil via bitcoin-dev" <
bitcoin-***@lists.linuxfoundation.org> wrote:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

On 03/31/2017 02:23 PM, Rodney Morris via bitcoin-dev wrote:
> If the obsession with every personal computer being able to run a
> full node continues then bitcoin will be consigned to the dustbin
> of history,

The cause of the block size debate is the failure to understand the
Bitcoin security model. This failure is perfectly exemplified by the
above statement. If a typical personal computer cannot run a node
there is no security.


If you're capable of running and trusting your own node, chances are you
already have something better than a typical personal computer!

And those who don't have it themselves likely know where they can run or
access a node they can trust.

If you're expecting the average Joe to trust the likely out-of-date node on
his old unpatched computer full of viruses, you're going to have a bad time.

The real solution is to find ways to reduce the required trust in a
practical manner.

Using lightweight clients with multiple servers has already been mentioned;
zero-knowledge proofs (if they can be made practical and stay secure...)
are another obvious future tool, and hardware wallets help against malware.

If you truly want everybody to run their own full nodes, the only plausible
solution is managed hardware in the style of Chromebooks, except that you
could pick your own distribution and software repository. Meaning you're
still trusting the exact same people whose nodes you would otherwise rely
on, except now you're mirroring their nodes on your own hardware instead.
Which at most improves auditability.
Eric Voskuil via bitcoin-dev
2017-04-01 07:41:46 UTC
On 03/31/2017 11:18 PM, Jared Lee Richardson wrote:
>> If a typical personal computer cannot run a node there is no
>> security.
>
> If you can't describe an attack that is made possible when typical
> personal computers can't run nodes, this kind of logic has no place
> in this discussion.

"Governments are good at cutting off the heads of a centrally
controlled networks..."

e
Natanael via bitcoin-dev
2017-04-01 14:45:41 UTC
On 1 Apr 2017 16:35, "Eric Voskuil via bitcoin-dev" <
bitcoin-***@lists.linuxfoundation.org> wrote:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

On 03/31/2017 11:18 PM, Jared Lee Richardson wrote:
>> If a typical personal computer cannot run a node there is no
>> security.
>
> If you can't describe an attack that is made possible when typical
> personal computers can't run nodes, this kind of logic has no place
> in this discussion.

"Governments are good at cutting off the heads of a centrally
controlled networks..."


That's what's so great about Bitcoin. The blockchain is the same
everywhere.

So if you can connect to private peers in several jurisdictions, chances
are they won't all be lying to you in the exact same way. Which is what
they would need to do to fool you.
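The multi-jurisdiction cross-check described above amounts to a simple quorum rule, which could be sketched like this. The function name and quorum value are hypothetical, and a real client would compare full headers and proof-of-work, not just tip hashes:

```python
from collections import Counter

def agreed_tip(tips, quorum):
    """tips: best-block hashes reported by independently chosen peers,
    e.g. one per jurisdiction. Return the hash only if at least `quorum`
    peers agree on it; otherwise return None so the user can be warned."""
    if not tips:
        return None
    tip, count = Counter(tips).most_common(1)[0]
    return tip if count >= quorum else None

# Three of four peers agree: accept. No quorum: refuse and alert.
assert agreed_tip(["h1", "h1", "h1", "h2"], quorum=3) == "h1"
assert agreed_tip(["h1", "h2", "h3"], quorum=2) is None
```

The point of the argument above is that an attacker must compromise peers in several jurisdictions simultaneously, and in exactly the same way, to defeat such a check.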

If you run your own and can't protect it, they'll just hack your node and
make it lie to you.
Jared Lee Richardson via bitcoin-dev
2017-04-01 18:42:50 UTC
That's a quoted general statement that is highly subjective, not a
description of an attack. If you can't articulate a specific attack vector
that we're defending against, such a defense has no value.

On Apr 1, 2017 12:41 AM, "Eric Voskuil" <***@voskuil.org> wrote:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

On 03/31/2017 11:18 PM, Jared Lee Richardson wrote:
>> If a typical personal computer cannot run a node there is no
>> security.
>
> If you can't describe an attack that is made possible when typical
> personal computers can't run nodes, this kind of logic has no place
> in this discussion.

"Governments are good at cutting off the heads of a centrally
controlled networks..."

e