Discussion:
[Bitcoin-development] Handling miner adoption gracefully for embedded consensus systems via double-spending/replace-by-fee
Peter Todd
2014-03-22 08:47:02 UTC
There's been a lot of recent hoopla over proof-of-publication, with the
OP_RETURN <data> length getting reduced to a rather useless 40 bytes at
the last minute prior to the 0.9 release. Secondly, I noticed an
overlooked security flaw: OP_CHECKMULTISIG sigops weren't taken
into account, making it possible to broadcast unminable transactions and
bloat mempools.(1) My suggestion was to just ditch bare OP_CHECKMULTISIG
outputs given that the sigops limit and the way they use up a fixed 20
sigops per op makes them hard to do fee calculations for. They also make
it easy to bloat the UTXO set, potentially a bad thing. This would of
course require things using them to change. Currently that's just
Counterparty, so I gave them the heads up in my email.

To make a long story short, it was soon suggested that Bitcoin Core be
forked - the software, not the protocol - and miners encouraged to
support it. This isn't actually as trivial as it sounds, as you need to
add some anti-DoS stuff to deal with the fact that the hashing power
mining the transactions you are relaying may be quite small. The second
issue is you need to add preferential peering, so the nodes in the
network with a broader idea of what is an "allowed" transaction can find
each other (likely with a new service flag). It'd be a good time to
implement replace-by-fee, partly for its anti-DoS properties.

Which leaves us with a practical question: How do you gracefully handle
a switchover? First of all I suggest that proof-of-publication
applications adopt format flexibility, similar to how Mastercoin can
encode its data in pay-to-pubkeyhash, bare multisig, or OP_RETURN
outputs. Given the possibility of bare multisig going away, I'd suggest
that P2SH multisig scriptSig encoding be added as well. Note that a
really good implementation of all this is actually format-agnostic, and
will let the PUSHDATAs used for encoding data be specified arbitrarily. I
wrote up some code to do so a while back as an experiment. It used the
LSBs of the nValue field in the txouts to specify what was and wasn't
data, along with some steganographic encryption of data and nValue. I'd be
happy to dig that up if anyone is interested.
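
Sketched in Python, purely for illustration - the names are invented
and this isn't the experimental code itself:

def tag_output_values(outputs):
    """outputs: list of (nValue, is_data) pairs -> tagged nValue list.
    The least-significant bit of nValue flags data-carrying txouts."""
    return [(v & ~1) | int(is_data) for v, is_data in outputs]

def extract_data(outputs):
    """outputs: list of (nValue, pushdata bytes) pairs -> published data,
    taken from the PUSHDATAs of whatever output format was used."""
    return b''.join(push for v, push in outputs if v & 1)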

All these methods have some overhead compared to just using OP_RETURN
and thus cost more. So I suggest you have your software simultaneously
double-spend the inputs to any proof-of-publication transaction with a
second transaction that just makes use of efficient OP_RETURN. That
second one can go to more enlightened miners. Only one or the other will
get mined of course and the cost to publish data will be proportional to
the relative % of hashing power in the two camps.
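
To make the economics concrete, a quick sketch (numbers invented):

# p = fraction of hashing power mining the OP_RETURN-friendly ruleset.
# Exactly one of the two conflicting txs confirms, so the expected fee
# is just the hashing-power-weighted average of the two fees.
def expected_publication_fee(p, fee_op_return, fee_fancy_encoding):
    return p * fee_op_return + (1.0 - p) * fee_fancy_encoding

# e.g. 20% enlightened hashing power, 0.0001 vs 0.0005 BTC in fees:
# expected_publication_fee(0.2, 0.0001, 0.0005) -> 0.00042 BTC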

Finally I'll be writing something more detailed soon about why
proof-of-publication is essential and miners would be smart to support
it. But the tl;dr of it is: if you need proof-of-publication for what
your system does, you're much more secure if you're embedded within
Bitcoin rather than alongside of it. There's a lot of very bad advice
getting thrown around lately for things like Mastercoin, Counterparty,
and for that matter, Colored Coins, to use a separate PoW blockchain or
a merge-mined one. The fact is if you go with pure PoW, you risk getting
attacked while you're still growing, and if you go for merge-mined PoW,
the attacker can do so for free. We've got a real-world example of the
former with Twister, among many others, usually resulting in a switch to
a centralized checkpointing scheme. For the latter we have Coiledcoin,
an alt that made the mistake of using SHA256 merge-mining and got killed
off early at birth with a zero-cost 51% attack. There is of course a
censorship risk to going the embedded route, but at least we know that
for the foreseeable future doing so will require explicit blacklists,
something most people here are against.

To MSC, XCP and others: Now I'm not going to say you shouldn't take
advice from people who call your finance 2.0 systems scams, or maybe if
they're nice, indistinguishable from a scam. But when you do, you should
think for yourself before just trusting that some authority figure has
your best interests in mind.


1) Yes, this was responsibly disclosed to the security mailing list. It
was revealed to the public a few hours later on the -dev IRC channel
without a fix.
--
'peter'[:-1]@petertodd.org
00000000000000009065ab15f4a036e9ec13d2e788e0ede69472e0ec396b097f
Jorge Timón
2014-03-22 13:53:41 UTC
Post by Peter Todd
There's been a lot of recent hoopla over proof-of-publication, with the
OP_RETURN <data> length getting reduced to a rather useless 40 bytes at
the last minute prior to the 0.9 release.
I'm not against miners accepting transactions that have longer
data in non-utxo-polluting OP_RETURN than whatever is specified as
standard by the reference implementation; maybe the standard limit
should be raised, but I think it was assumed that the most common case
would be to include the root hash of some "merklized" structure.
My only argument against non-validated proof of publication is that in
the long run it will be very expensive, since such transactions will
have to compete with transactions that actually use the utxo, a feature
that is more valuable. But that's mostly speculation and doesn't imply
the need for any action against it. I would strongly oppose a
limitation on OP_RETURN at the protocol level (other than the block
size limit itself, that is) and wouldn't mind if it were removed from
isStandard. I didn't pay much attention to that and honestly I don't
care enough.
Maybe this encourages miners to adopt their own policies, which could
be good for things like replace-by-fee, the rational policy for
miners, which I strongly support (combined with game theory it can
provide "instant" transactions, as you pointed out in another thread).

Maybe the right approach is to keep improving modularity and implement
different, configurable mining policies.
Post by Peter Todd
All these methods have some overhead compared to just using OP_RETURN
and thus cost more.
I thought the consensus was that op_return was the right way to put
non-validated data in the chain, but limiting it in standard policies
doesn't seem consistent with that.
Post by Peter Todd
Finally I'll be writing something more detailed soon about why
proof-of-publication is essential and miners would be smart to support
it. But the tl;dr of it is: if you need proof-of-publication for what
your system does, you're much more secure if you're embedded within
Bitcoin rather than alongside of it. There's a lot of very bad advice
getting thrown around lately for things like Mastercoin, Counterparty,
and for that matter, Colored Coins, to use a separate PoW blockchain or
a merge-mined one. The fact is if you go with pure PoW, you risk getting
attacked while you're still growing, and if you go for merge-mined PoW,
the attacker can do so for free. We've got a real-world example of the
former with Twister, among many others, usually resulting in a switch to
a centralized checkpointing scheme. For the latter we have Coiledcoin,
an alt that made the mistake of using SHA256 merge-mining and got killed
off early at birth with a zero-cost 51% attack. There is of course a
censorship risk to going the embedded route, but at least we know that
for the foreseeable future doing so will require explicit blacklists,
something most people here are against.
The "proof of publication vs separate chain" discussion is orthogonal
to the "merged mining vs independent chain" one.
If I remember correctly, last time you admitted after my example that
merged mining was comparatively better than a separate chain, that it
was economically harder to attack. I guess ecological arguments won't
help here, but you're confusing people developing independent chains
and thus pushing them toward a less secure (apart from a more wasteful
setup) system design.
Coiledcoin just proves that merged mining may not be the best way to
bootstrap a currency, but you can also start separate and then switch
to merged mining once you have sufficient independent support.
As far as I can tell twister doesn't have a realistic reward mechanism
for miners, so the incentives are broken before considering merged
mining.
Proof of work is irreversible and it's a good thing to share it.
Thanks Satoshi for proposing this great idea of merged mining and
thanks vinced for a first implementation with a data structure that
can be improved.

Peter Todd, I don't think you're being responsible or wise saying
nonsense like "merged mined chains can be attacked for free", and I
suggest that you prove your claims by attacking namecoin "for free".
Please, enlighten us: how is that done?
It should be easier with the scamcoin ixcoin, which has a much lower
subsidy to miners, so I don't feel bad about the suggestion if your
"free attack" somehow works (certainly using some magic I don't know
about).
--
Jorge Timón

http://freico.in/
Peter Todd
2014-03-22 19:34:35 UTC
Post by Jorge Timón
Post by Peter Todd
There's been a lot of recent hoopla over proof-of-publication, with the
OP_RETURN <data> length getting reduced to a rather useless 40 bytes at
the last minute prior to the 0.9 release.
I'm not against miners accepting transactions that have longer
data in non-utxo-polluting OP_RETURN than whatever is specified as
standard by the reference implementation; maybe the standard limit
should be raised, but I think it was assumed that the most common case
would be to include the root hash of some "merklized" structure.
My only argument against non-validated proof of publication is that in
the long run it will be very expensive, since such transactions will
have to compete with transactions that actually use the utxo, a feature
that is more valuable. But that's mostly speculation and doesn't imply
the need for
Well remember that my thinking re: UTXO is that we need to move to a
system like TXO commitments where storing the entirety of the UTXO set
for all eternity is *not* required. Of course, that doesn't necessarily
mean you can't have the advantages of UTXO commitments, but they need to
be limited in some reasonable way so that long-term storage requirements
do not grow without bound. For example, having TXO
commitments with a bounded-size committed UTXO set seems reasonable; old
UTXOs can be dropped from the bounded-size set, but can still be spent
via the underlying TXO commitment mechanism.
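
A rough sketch of the shape I have in mind (structure and names
invented; real proof verification omitted):

from collections import OrderedDict

class BoundedUTXOSet:
    """Committed UTXO set of bounded size; old entries are dropped but
    remain spendable with a proof against the underlying TXO tree."""
    def __init__(self, max_size):
        self.max_size = max_size
        self.utxos = OrderedDict()  # txoutid -> txout, oldest first

    def add(self, txoutid, txout):
        self.utxos[txoutid] = txout
        while len(self.utxos) > self.max_size:
            self.utxos.popitem(last=False)  # drop oldest from the set

    def spend(self, txoutid, txo_proof=None):
        if txoutid in self.utxos:
            del self.utxos[txoutid]
            return True
        # Not in the bounded set: fall back to the TXO commitment
        # mechanism; verifying the proof is out of scope here.
        return txo_proof is not None
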
Post by Jorge Timón
any action against it. I would strongly oppose a
limitation on OP_RETURN at the protocol level (other than the block
size limit itself, that is) and wouldn't mind if it were removed from
isStandard. I didn't pay much attention to that and honestly I don't
care enough.
Maybe this encourages miners to adopt their own policies, which could
be good for things like replace-by-fee, the rational policy for
miners, which I strongly support (combined with game theory it can
provide "instant" transactions, as you pointed out in another thread).
Maybe the right approach is to keep improving modularity and implement
different, configurable mining policies.
Like I said the real issue is making it easy to get those !IsStandard()
transactions to the miners who are interested in them. The service bit
flag I proposed + preferential peering - reserve, say, 50% of your
peering slots for nodes advertising non-std tx relaying - is simple
enough, but it is vulnerable to sybil attacks if done naively.
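
Sketched in Python with a made-up service bit - the sybil resistance
is the hard part and isn't shown:

import random

NODE_RELAY_NONSTD = 1 << 10  # hypothetical service flag

def pick_outbound_peers(candidates, n_slots=8, reserved=0.5):
    """candidates: list of (addr, services). Reserve half the slots for
    peers advertising non-standard tx relay; fill the rest from anyone."""
    nonstd = [c for c in candidates if c[1] & NODE_RELAY_NONSTD]
    others = [c for c in candidates if not c[1] & NODE_RELAY_NONSTD]
    picked = random.sample(nonstd, min(int(n_slots * reserved), len(nonstd)))
    pool = others + [c for c in nonstd if c not in picked]
    picked += random.sample(pool, min(n_slots - len(picked), len(pool)))
    return picked
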
Post by Jorge Timón
Post by Peter Todd
All these methods have some overhead compared to just using OP_RETURN
and thus cost more.
I thought the consensus was that op_return was the right way to put
non-validated data in the chain, but limiting it in standard policies
doesn't seem consistent with that.
Right, but there's also a lot of the community who thinks
proof-of-publication applications are bad and should be discouraged. I
argued before that the way OP_RETURN was being deployed didn't actually
give any reason to use it vs. other data encoding methods.

Unfortunately underlying all this is a real ignorance about how Bitcoin
actually works and what proof-of-publication actually is:

14-03-20.log:12:47 < gavinandresen> jgarzik: RE: mastercoin/OP_RETURN:
what's the current thinking on Best Way To Do It? Seems if I was to do
it I'd just embed 20-byte RIPEMD160 hashes in OP_RETURN, and fetch the
real data from a DHT or website (or any-of-several websites).
Blockchain as reference ledger, not as data storage.
Post by Jorge Timón
Peter Todd, I don't think you're being responsible or wise saying
nonsense like "merged mined chains can be attacked for free", and I
suggest that you prove your claims by attacking namecoin "for free".
Please, enlighten us: how is that done?
I think we're just going to have to agree to disagree on our
interpretations of the economics with regard to attacking merge-mined
chains. Myself, I'm very, very wary of systems that have poor security
against economically irrational attackers regardless of how good the
security is, in theory, against economically rational ones.

Again, what it comes down to in the end is that when I'm advising
Mastercoin, Counterparty, Colored Coins, etc. on how they should design
their systems I know that if they do proof-of-publication on the Bitcoin
blockchain, it may cost a bit more money than possible alternatives per
transaction, but the security is very well understood and robust. Fact
is, these applications can certainly afford to pay the higher
transaction fees - they're far from the least economically valuable use
of Blockchain space. Meanwhile the alternatives have, at best, much more
dubious security properties and at worst no security at all.
(announce/commit sacrifices are a great example of this, and very easy to
understand)
--
'peter'[:-1]@petertodd.org
0000000000000000bbcc531d48bea8d67597e275b5abcff18e18f46266723e91
Jorge Timón
2014-03-22 20:12:20 UTC
Post by Peter Todd
Well remember that my thinking re: UTXO is that we need to move to a
system like TXO commitments where storing the entirety of the UTXO set
for all eternity is *not* required. Of course, that doesn't necessarily
mean you can't have the advantages of UTXO commitments, but they need to
be limited in some reasonable way so that long-term storage requirements
do not grow without bound. For example, having TXO
commitments with a bounded-size committed UTXO set seems reasonable; old
UTXOs can be dropped from the bounded-size set, but can still be spent
via the underlying TXO commitment mechanism.
Although not having to download the whole blockchain to operate a
trustless full node is theoretically possible, it is not clear that
such schemes will work in practice or would be accepted, and we
certainly don't have them now.
So I don't think potential future theoretical scalability improvements
are solid arguments in favor of supporting proof of publication now.
Post by Peter Todd
Like I said the real issue is making it easy to get those !IsStandard()
transactions to the miners who are interested in them. The service bit
flag I proposed + preferential peering - reserve, say, 50% of your
peering slots for nodes advertising non-std tx relaying - is simple
enough, but it is vulnerable to sybil attacks if done naively.
My point is that this seems relevant to competing mining policies in general.
Post by Peter Todd
Right, but there's also a lot of the community who thinks
proof-of-publication applications are bad and should be discouraged. I
argued before that the way OP_RETURN was being deployed didn't actually
give any reason to use it vs. other data encoding methods.
Unfortunately underlying all this is a real ignorance about how Bitcoin
I understand that proof of publication is not the same thing as
regular timestamping, but requiring permanent storage in the
blockchain is not the only way you can implement proof of publication.
Mark Friedenbach proposes this:

Store hashes, or a hash root, and soft-fork so that blocks are only
accepted if (a) the data tree is provided, or (b) sufficient work is
built on it and/or sufficient time has passed.

This way full nodes can ignore the published data until it is sufficiently buried.
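
In other words, roughly (a sketch of the rule, names invented):

def block_acceptable(has_data_tree, confirmations, min_burial=100):
    # (a) the committed data is actually provided, or
    # (b) the commitment is already buried under sufficient work
    return has_data_tree or confirmations >= min_burial
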
Post by Peter Todd
I think we're just going to have to agree to disagree on our
interpretations of the economics with regard to attacking merge-mined
chains. Myself, I'm very, very wary of systems that have poor security
against economically irrational attackers regardless of how good the
security is, in theory, against economically rational ones.
The attacker was of course economically irrational in my previous
example, about which you didn't have any complaint. So I think we can
agree that a merged-mined separate chain is more secure than a
non-merged-mined separate chain and that attacking a merged-mined
chain is not free.
By not being clear on this you're indirectly promoting non-merged-mined
altchains as a better option than merged-mined altchains, which
I don't think is responsible on your part.
Post by Peter Todd
Again, what it comes down to in the end is that when I'm advising
Mastercoin, Counterparty, Colored Coins, etc. on how they should design
their systems I know that if they do proof-of-publication on the Bitcoin
blockchain, it may cost a bit more money than possible alternatives per
transaction, but the security is very well understood and robust. Fact
is, these applications can certainly afford to pay the higher
transaction fees - they're far from the least economically valuable use
of Blockchain space. Meanwhile the alternatives have, at best, much more
dubious security properties and at worst no security at all.
(announce/commit sacrifices are a great example of this, and very easy to
understand)
I agree that we disagree on additional non-validated data in the main
chain vs merged-mined chains as the best way to implement additional
features.
But please, you don't need to spread and maintain existing myths about
merged mining to make your case. If you insist on doing it I will
start to think that the honesty of your arguments is not something
important to you, and you just prefer to try to get people on your
side by any means, which would be very disappointing.
Troy Benjegerdes
2014-03-23 23:17:37 UTC
Post by Jorge Timón
Post by Peter Todd
Right, but there's also a lot of the community who thinks
proof-of-publication applications are bad and should be discouraged. I
argued before that the way OP_RETURN was being deployed didn't actually
give any reason to use it vs. other data encoding methods.
Unfortunately underlying all this is a real ignorance about how Bitcoin
I understand that proof of publication is not the same thing as
regular timestamping, but requiring permanent storage in the
blockchain is not the only way you can implement proof of publication.
Store hashes, or a hash root, and soft-fork so that blocks are only
accepted if (a) the data tree is provided, or (b) sufficient work is
built on it and/or sufficient time has passed.
This way full nodes can ignore the published data until it is sufficiently buried.
Post by Peter Todd
I think we're just going to have to agree to disagree on our
interpretations of the economics with regard to attacking merge-mined
chains. Myself, I'm very, very wary of systems that have poor security
against economically irrational attackers regardless of how good the
security is, in theory, against economically rational ones.
The attacker was of course economically irrational in my previous
example, about which you didn't have any complaint. So I think we can
agree that a merged-mined separate chain is more secure than a
non-merged-mined separate chain and that attacking a merged-mined
chain is not free.
By not being clear on this you're indirectly promoting non-merged-mined
altchains as a better option than merged-mined altchains, which
I don't think is responsible on your part.
I can't speak for Peter, but *I* am currently of the opinion that non-merged-
mined altchains using memory-hard proof-of-work are a far better option than
sha-256 merged-mined altchains. This is not a popular position on this list,
and I would like to respectfully disagree, but still collaborate on all the
other things where bitcoin-core *is* the best-in-class code available.

A truly 'distributed' system must support multiple altchains, and multiple
proof-of-work hash algorithms, and probably support proof-of-stake as well.

If sha-256 is the only game in town, the only advantage over the Federal
Reserve is that I can at least audit the code that controls the money supply,
but it's not in any way distributed if the hash power is concentrated
among 5-10 major pools and 5-10 sha-256 ASIC vendors.

I find it very irresponsible for Bitcoiners to on one hand extol the virtues
of distributed systems and then in the same message dismiss any discussion
of alternate chains as 'off-topic'.

If bitcoin-core is for *distributed systems*, then all the different altcoins
with different hash algorithms should be viable topics for discussion.

----------------------------------------------------------------------------
Troy Benjegerdes 'da hozer' ***@hozed.org
7 elements earth::water::air::fire::mind::spirit::soul grid.coop

Never pick a fight with someone who buys ink by the barrel,
nor try to buy a hacker who makes money by the megahash
Mark Friedenbach
2014-03-23 23:53:48 UTC
This isn't distributed-systems-development, it is bitcoin-development.
Discussion over chain parameters is a fine thing to have among people
who are interested in that sort of thing. But not here.
Post by Troy Benjegerdes
I find it very irresponsible for Bitcoiners to on one hand extol the virtues
of distributed systems and then in the same message dismiss any discussion
of alternate chains as 'off-topic'.
If bitcoin-core is for *distributed systems*, then all the different altcoins
with different hash algorithms should be viable topics for discussion.
Troy Benjegerdes
2014-03-24 20:34:03 UTC
I think that's fair, so long as we limit bitcoin-development discussion to
issues that are relevant to the owners of the hashrate and companies that
pay developer salaries.

What I'm asking for is some honesty that Bitcoin is a centralized system
and to stop arguing technical points on the altar of distributed/decentralized
whatever. It's pretty clear if you want decentralized you should go with
altchains.

I'm here because I want to sell corn for bitcoin, and I believe it will be
more profitable for me to do that with a bitcoin-blockchain-based system
in which I have the capability to audit the code that executes the trade.
Post by Mark Friedenbach
This isn't distributed-systems-development, it is bitcoin-development.
Discussion over chain parameters is a fine thing to have among people
who are interested in that sort of thing. But not here.
Post by Troy Benjegerdes
I find it very irresponsible for Bitcoiners to on one hand extol the virtues
of distributed systems and then in the same message dismiss any discussion
of alternate chains as 'off-topic'.
If bitcoin-core is for *distributed systems*, then all the different altcoins
with different hash algorithms should be viable topics for discussion.
Mark Friedenbach
2014-03-24 20:57:14 UTC
Post by Troy Benjegerdes
I'm here because I want to sell corn for bitcoin, and I believe it will be
more profitable for me to do that with a bitcoin-blockchain-based system
in which I have the capability to audit the code that executes the trade.
A discussion over such a system would be on-topic. Indeed I have made my
own proposals for systems with that capability in the past:

http://sourceforge.net/p/bitcoin/mailman/message/31322676/

There's no reason to invoke alts, however. There are ways this can
be done within the bitcoin ecosystem, using bitcoins:

http://sourceforge.net/p/bitcoin/mailman/message/32108143/
Post by Troy Benjegerdes
I think that's fair, so long as we limit bitcoin-development discussion to
issues that are relevant to the owners of the hashrate and companies that
pay developer salaries.
What I'm asking for is some honesty that Bitcoin is a centralized system
and to stop arguing technical points on the altar of distributed/decentralized
whatever. It's pretty clear if you want decentralized you should go with
altchains.
Bitcoin is not a centralized system, and neither is its development. I
don't even know how to respond to that. Bringing up altchains is a total
red herring.

This is *bitcoin*-development. Please don't make it have to become a
moderated mailing list.
Troy Benjegerdes
2014-03-25 22:10:54 UTC
Post by Mark Friedenbach
Post by Troy Benjegerdes
I'm here because I want to sell corn for bitcoin, and I believe it will be
more profitable for me to do that with a bitcoin-blockchain-based system
in which I have the capability to audit the code that executes the trade.
A discussion over such a system would be on-topic. Indeed I have made my
own proposals for systems with that capability in the past:
http://sourceforge.net/p/bitcoin/mailman/message/31322676/
There's no reason to invoke alts, however. There are ways this can
be done within the bitcoin ecosystem, using bitcoins:
http://sourceforge.net/p/bitcoin/mailman/message/32108143/
Post by Troy Benjegerdes
I think that's fair, so long as we limit bitcoin-development discussion to
issues that are relevant to the owners of the hashrate and companies that
pay developer salaries.
What I'm asking for is some honesty that Bitcoin is a centralized system
and to stop arguing technical points on the altar of distributed/decentralized
whatever. It's pretty clear if you want decentralized you should go with
altchains.
Bitcoin is not a centralized system, and neither is its development. I
don't even know how to respond to that. Bringing up altchains is a total
red herring.
This is *bitcoin*-development. Please don't make it have to become a
moderated mailing list.
When I can pick up a miner at Best Buy and pay it off in 9 months I'll
agree with you that bitcoin *might* be decentralized. Maybe there's a
chance this *will* happen eventually, but right now we have a couple of
mining cartels that control most of the hashrate.

There are plenty of interesting alt-hash-chains for which mass produced,
general purpose (or gpgpu-purpose) hardware exists and is in high volume
mass production.
kjj
2014-03-26 01:09:01 UTC
Post by Troy Benjegerdes
Post by Mark Friedenbach
Bitcoin is not a centralized system, and neither is its development. I
don't even know how to respond to that. Bringing up altchains is a total
red herring.
This is *bitcoin*-development. Please don't make it have to become a
moderated mailing list.
When I can pick up a miner at Best Buy and pay it off in 9 months I'll
agree with you that bitcoin *might* be decentralized. Maybe there's a
chance this *will* happen eventually, but right now we have a couple of
mining cartels that control most of the hashrate.
There are plenty of interesting alt-hash-chains for which mass produced,
general purpose (or gpgpu-purpose) hardware exists and is in high volume
mass production.
Decentralized doesn't mean "everyone is doing it", it means "no one can
stop you from doing it". Observe bitcoin development. A few people do
the bulk of the work, a bunch more people (like me) do work ranging from
minor to trivial, and millions do nothing. And yet, it is still totally
decentralized because no one can stop anyone from making whatever
changes they want.

So it is also with mining. The world overall may make it impractical,
perhaps even foolish, for you to fire up your CPU and mine solo, but no
one is stopping you, and more to the point, no one is capable of
stopping you. There is no center from which you must ask permission.

On moderation, I note that moderation can also be done in a
decentralized fashion. I offer this long overdue example:

:0
* ^From.****@hozed.org
/dev/null
Troy Benjegerdes
2014-03-22 15:08:36 UTC
Post by Peter Todd
There's been a lot of recent hoopla over proof-of-publication, with the
OP_RETURN <data> length getting reduced to a rather useless 40 bytes at
the last minute prior to the 0.9 release. Secondly, I noticed an
overlooked security flaw: OP_CHECKMULTISIG sigops weren't taken
into account, making it possible to broadcast unminable transactions and
bloat mempools.(1) My suggestion was to just ditch bare OP_CHECKMULTISIG
outputs given that the sigops limit and the way they use up a fixed 20
sigops per op makes them hard to do fee calculations for. They also make
it easy to bloat the UTXO set, potentially a bad thing. This would of
course require things using them to change. Currently that's just
Counterparty, so I gave them the heads up in my email.
I've spent some time looking at the Datacoin code, and I've come to the
conclusion that the next copycatcoin I release will have an explicit 'data'
field with something like 169 bytes (a baker's dozen squared), which will
add 1 byte to each transaction if unused, and provide a small, but usable,
data field for proof of publication. As a new coin, I can also do a
hardfork that increases the data size limit much more easily if there is a
compelling reason to make it bigger.

I think this will prove to be a much more reliable infrastructure for
proof of publication than various hacks to overcome 40 byte limits with
Bitcoin.

I am disclosing this here so the bitcoin 1% has plenty of time to evaluate
the market risk they face from the 40 byte limit, and put some pressure to
implement some of the alternatives Todd proposes.
--
----------------------------------------------------------------------------
Troy Benjegerdes 'da hozer' ***@hozed.org
7 elements earth::water::air::fire::mind::spirit::soul grid.coop

Never pick a fight with someone who buys ink by the barrel,
nor try to buy a hacker who makes money by the megahash
Mark Friedenbach
2014-03-22 17:04:30 UTC
Please, by all means: ignore our well-reasoned arguments about
externalized storage and validation cost and alternative solutions.
Please re-discover how proof of publication doesn't require burdening
the network with silly extra data that must be transmitted, kept, and
validated from now until the heat death of the universe. Your failure
will make my meager bitcoin holdings all the more valuable! As, despite
persistent assertions to the contrary, making quality software freely
available at zero cost does not pay well, even in finance. You will not
find core developers in the Bitcoin 1%.

Please feel free to flame me back, but off-list. This is for *bitcoin*
development.
Post by Troy Benjegerdes
Post by Peter Todd
There's been a lot of recent hoopla over proof-of-publication, with the
OP_RETURN <data> length getting reduced to a rather useless 40 bytes at
the last minute prior to the 0.9 release. Secondly, I noticed an
overlooked security flaw: OP_CHECKMULTISIG sigops weren't taken
into account, making it possible to broadcast unminable transactions and
bloat mempools.(1) My suggestion was to just ditch bare OP_CHECKMULTISIG
outputs given that the sigops limit and the way they use up a fixed 20
sigops per op makes them hard to do fee calculations for. They also make
it easy to bloat the UTXO set, potentially a bad thing. This would of
course require things using them to change. Currently that's just
Counterparty, so I gave them the heads up in my email.
I've spent some time looking at the Datacoin code, and I've come to the
conclusion that the next copycatcoin I release will have an explicit 'data'
field with something like 169 bytes (a baker's dozen squared), which will
add 1 byte to each transaction if unused, and provide a small, but usable,
data field for proof of publication. As a new coin, I can also do a
hardfork that increases the data size limit much more easily if there is a
compelling reason to make it bigger.
I think this will prove to be a much more reliable infrastructure for
proof of publication than various hacks to overcome 40 byte limits with
Bitcoin.
I am disclosing this here so the bitcoin 1% has plenty of time to evaluate
the market risk they face from the 40 byte limit, and put some pressure to
implement some of the alternatives Todd proposes.
Peter Todd
2014-03-22 19:08:25 UTC
Post by Troy Benjegerdes
Post by Peter Todd
There's been a lot of recent hoopla over proof-of-publication, with the
OP_RETURN <data> length getting reduced to a rather useless 40 bytes at
the last minute prior to the 0.9 release. Secondly, I noticed an
overlooked security flaw: OP_CHECKMULTISIG sigops weren't taken
into account, making it possible to broadcast unminable transactions and
bloat mempools.(1) My suggestion was to just ditch bare OP_CHECKMULTISIG
outputs given that the sigops limit and the way they use up a fixed 20
sigops per op makes them hard to do fee calculations for. They also make
it easy to bloat the UTXO set, potentially a bad thing. This would of
course require things using them to change. Currently that's just
Counterparty, so I gave them the heads up in my email.
I've spent some time looking at the Datacoin code, and I've come to the
conclusion that the next copycatcoin I release will have an explicit 'data'
field with something like 169 bytes (a baker's dozen squared), which will
add 1 byte to each transaction if unused, and provide a small, but usable,
data field for proof of publication. As a new coin, I can also do a
hardfork that increases the data size limit much more easily if there is a
compelling reason to make it bigger.
I think this will prove to be a much more reliable infrastructure for
proof of publication than various hacks to overcome 40 byte limits with
Bitcoin.
I am disclosing this here so the bitcoin 1% has plenty of time to evaluate
the market risk they face from the 40 byte limit, and put some pressure to
implement some of the alternatives Todd proposes.
Lol! Granted, I guess I should "disclose" that I'm working on tree
chains, which just improve the scalability of blockchains directly. I
think tree-chains could be implemented as a soft-fork; if applied to
Bitcoin the datacoin 1% might face market risk. :P
--
'peter'[:-1]@petertodd.org
0000000000000000bbcc531d48bea8d67597e275b5abcff18e18f46266723e91
Troy Benjegerdes
2014-03-23 22:37:52 UTC
Post by Peter Todd
Post by Troy Benjegerdes
Post by Peter Todd
There's been a lot of recent hoopla over proof-of-publication, with the
OP_RETURN <data> length getting reduced to a rather useless 40 bytes at
the last minute prior to the 0.9 release. Secondly, I noticed an
overlooked security flaw: OP_CHECKMULTISIG sigops weren't taken
into account, making it possible to broadcast unminable transactions and
bloat mempools.(1) My suggestion was to just ditch bare OP_CHECKMULTISIG
outputs given that the sigops limit and the way they use up a fixed 20
sigops per op makes them hard to do fee calculations for. They also make
it easy to bloat the UTXO set, potentially a bad thing. This would of
course require things using them to change. Currently that's just
Counterparty, so I gave them the heads up in my email.
I've spent some time looking at the Datacoin code, and I've come to the
conclusion that the next copycatcoin I release will have an explicit 'data'
field with something like 169 bytes (a baker's dozen squared), which will
add 1 byte to each transaction if unused, and provide a small, but usable,
data field for proof of publication. As a new coin, I can also do a
hardfork that increases the data size limit much more easily if there is a
compelling reason to make it bigger.
I think this will prove to be a much more reliable infrastructure for
proof of publication than various hacks to overcome 40 byte limits with
Bitcoin.
I am disclosing this here so the bitcoin 1% has plenty of time to evaluate
the market risk they face from the 40 byte limit, and put some pressure to
implement some of the alternatives Todd proposes.
Lol! Granted, I guess I should "disclose" that I'm working on tree
chains, which just improve the scalability of blockchains directly. I
think tree-chains could be implemented as a soft-fork; if applied to
Bitcoin the datacoin 1% might face market risk. :P
Soft-fork tree chains with reasonable data/memo/annotation storage would be
extremely interesting. The important question, however, is how does one
build a *business* around such a thing, including getting paid as a developer.

What I find extremely relevant to the **bitcoin-dev** list are discussions
about how to motivate the people who own the hashrate and bulk of the coins
(aka, the bitcoin 1%) to PAY DEVELOPERS, and thus it is good marketing FOR
BITCOIN DEVELOPERS to remind the people who profit from our efforts they need
to make it profitable for developers to work on bitcoin.

If it's more profitable for innovative developers to premine and release
$NEWCOIN-blockchain than it is to work on Bitcoin-blockchain, is that a valid
discussion for this list? Or do you just want to stick your heads in the sand
while VC's look to disrupt Bitcoin?
Peter Todd
2014-03-25 12:28:51 UTC
Btw, any chance we could get a summary description of tree-chains
posted to bitcoin-development?
Sure:

Introduction
============

Bitcoin doesn't scale. There are a lot of issues at hand here, but the
most fundamental of them is that to create a block you need to update
the state of the UTXO set, and the way Bitcoin is designed means that
updating that state requires bandwidth equal to all the transaction
volume to keep up with the changes to that set. Long story short, we get
O(n^2) scaling, which is just plain infeasible.

So let's split up the transaction volume so every individual miner only
needs to keep up with some portion. In a rough sense that's what
alt-coins do - all the tipping microtransactions on Doge never have to
hit the Bitcoin blockchain for instance, reducing pressure on the
latter. But moving value between chains is inconvenient; right now
moving value requires trusted third parties. Two-way atomic chain
transfers do help here, but as recent discussions on the topic showed
there are all sorts of edge cases with reorganizations that are tricky to
handle; at worst they could lead to inflation.

So what's the underlying issue there? The chains are too independent.
Even with merge-mining there's no real link between one chain and
another with regard to the order of transactions. Secondly merge-mining
suffers from 51% attacks if the chain being merge-mined doesn't have a
majority of total hashing power... which kinda defeats the point if
we're worried about miner scalability.


Blocks and the TXO set as a binary radix tree
=============================================

So how can we do better? Start with the "big picture" idea and take the
linear blockchain and turn it into a tree:

       ┌───────┴───────┐
   ┌───┴───┐       ┌───┴───┐
 ┌─┴─┐   ┌─┴─┐   ┌─┴─┐   ┌─┴─┐
┌┴┐ ┌┴┐ ┌┴┐ ┌┴┐ ┌┴┐ ┌┴┐ ┌┴┐ ┌┴┐

Obviously if we could somehow split up the UTXO set such that individual
miners/full nodes only had to deal with subsets of this tree we could
significantly reduce the bandwidth that any one miner would need to
process. Every transaction output would get a unique identifier, say
txoutid=H(txout) and we put those outputs in blocks appropriately.

We can't just wave a magic wand and say that every block has the above
structure and all miners co-ordinate to generate all blocks in one go.
Instead we'll do something akin to merge mining. Start with a linear
blockchain with ten blocks. Arrows indicate hashing:

a0 ⇜ a1 ⇜ a2 ⇜ a3 ⇜ a4 ⇜ a5 ⇜ a6 ⇜ a7 ⇜ a8 ⇜ a9

The following data structure could be the block header in this scheme.
We'll simplify things a bit and make up our own; obviously with some
more effort the standard Satoshi structures can be used too:

struct BlockHeader:
    uint256 prevBlockHash
    uint256 blockContentsHash
    uint256 target
    uint256 nonce
    uint    time

For now we'll say this is a pure-proof-of-publication chain, so our
block contents are very simple:

struct BlockContents:
    uint256 merkleRoot

As usual the PoW is valid if H(blockHeader) < blockHeader.target. Every
block creates new txouts, and the union of all such txouts is the txout
set. As shown previously(1) this basic proof-of-publication
functionality is sufficient to build a crypto-currency even without
actually validating the contents of the so-called transaction outputs.
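
In Python, with a made-up serialization - the only point is the
H(blockHeader) < target rule:

import hashlib

def block_header_hash(prev, contents, target, nonce, time):
    # prev and contents are 32-byte hashes; fields packed arbitrarily
    data = (prev + contents + target.to_bytes(32, 'little') +
            nonce.to_bytes(32, 'little') + time.to_bytes(4, 'little'))
    return int.from_bytes(hashlib.sha256(data).digest(), 'little')

def pow_valid(header_hash, target):
    return header_hash < target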

The scalability of this sucks, so let's add two more chains below the
root to start forming a tree. For fairness we'll only allow miners to
either mine a, a+b, or a+c; attempting to mine a block with both the b
and c chains simultaneously is not allowed.

struct BlockContents:
uint256 childBlockHash # may be null
bool childSide # left or right
uint256 merkleRoot

Furthermore we shard the TXO space by defining txoutid = H(txout) and
allowing any txout in chain a, and only txouts with LSB=0 in b, LSB=1 in
c; the beginning of a binary radix tree. With some variance thrown in we
get the following:

b0 ⇜⇜ b1 ⇜⇜⇜⇜⇜ b2 ⇜ b3 ⇜ b4 ⇜ b5 ⇜ b6 ⇜ b7 ⇜ b8
    ↙         ↙
a0 ⇜ a1 ⇜ a2 ⇜ a3 ⇜⇜⇜⇜⇜⇜ a4 ⇜ a5 ⇜ a6 ⇜ a7 ⇜ a8
   ↖    ↖    ↖         ↖    ↖
c0 ⇜ c1 ⇜ c2 ⇜ c3 ⇜⇜⇜⇜⇜⇜ c4 ⇜ c5 ⇜ c6 ⇜⇜⇜⇜⇜⇜ c7
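
The sharding rule itself is trivial; as a sketch, with chains named by
their bit-paths:

def allowed_in_chain(txoutid, chain_path):
    """chain_path: '' for a, '0' for b, '1' for c, '00' for b's left
    child, etc.; txoutid is an integer. Each level of the tree
    consumes one more bit of the txoutid."""
    for depth, bit in enumerate(chain_path):
        if (txoutid >> depth) & 1 != int(bit):
            return False
    return True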


We now have three different versions of the TXO set: ∑a, ∑a+∑b, and
∑a+∑c. Each of these versions is consistent in that for a given txoutid
prefix we can achieve consensus over the contents of the TXO set. Of
course, this definition is recursive:

a0 ⇜ a1 ⇜ a2 ⇜ a3 ⇜⇜⇜⇜⇜⇜ a4 ⇜ a5 ⇜ a6 ⇜ a7 ⇜ a8
   ↖    ↖    ↖         ↖    ↖
c0 ⇜ c1 ⇜ c2 ⇜ c3 ⇜⇜⇜⇜⇜⇜ c4 ⇜ c5 ⇜ c6 ⇜⇜⇜⇜⇜⇜ c7
   ↖    ↖    ↖         ↖    ↖
d0 ⇜ d1 ⇜⇜⇜⇜⇜⇜ d2 ⇜⇜⇜⇜⇜⇜ d3 ⇜ d4 ⇜⇜⇜ d5 ⇜⇜⇜⇜ d6

Unicode unfortunately lacks 3D box drawing at present, so I've only
shown left-sided child chains.


Herding the child-chains
========================

If all we were doing was publishing data, this would suffice. But what
if we want to synchronize our actions? For instance, we may want a new
txout to only be published in one chain if the corresponding txout in
another is marked spent. What we want is a reasonable rule for
child-chains to be invalidated when their parents are invalidated so as
to co-ordinate actions across distant child chains by relying on the
existence of their parents.

We start by removing the per-chain difficulties, leaving only a single
master proof-of-work target. Solutions less than target itself are
considered valid in the root chain, less than the target << 1 in the
root's children, << 2 in the children's children etc. In children that
means the header no longer contains a time, nonce, or target; the values
in the root block header are used instead:

struct ChildBlockHeader:
    uint256 prevChildBlockHash
    uint256 blockContentsHash
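
As a sketch, with invented names (shifted targets would be capped to
the hash range):

def shallowest_valid_depth(header_hash, target, max_depth=32):
    """A solution below target counts in the root chain; below
    target << k it counts k levels down. Returns the level closest to
    the root this solution reaches, or None if too weak everywhere."""
    for depth in range(max_depth + 1):
        if header_hash < min(target << depth, 2**256):
            return depth  # also valid at every deeper level
    return None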

For a given chain we always choose the one with the most total work. But
to get our ordering primitive we'll add a second, somewhat brutal, rule:
Parent always wins.

We achieve this by moving the child block header into the parent block
itself:

struct BlockContents:
    ChildBlockHeader childHeader # may be null (zeroed out)
    bool             childSide   # left or right
    bytes            txout

Let's look at how this works. We start with a parent and a child chain:

a0 ⇜ a1 ⇜ a2 ⇜ a3
   ↖    ↖
b0 ⇜ b1 ⇜ b2 ⇜ b3 ⇜ b4 ⇜ b5

First there is the obvious scenario where the parent chain is
reorganized. Here our node learns of a2 ⇜ a3' ⇜ a4':

             ⇜ a3' ⇜ a4'
a0 ⇜ a1 ⇜ a2 ⇜ a3 ⇜ X
   ↖    ↖
b0 ⇜ b1 ⇜ b2 ⇜ b3 ⇜ X

Block a3 is killed, resulting in the orphaning of b3, b4, and b5:

a0 ⇜ a1 ⇜ a2 ⇜ a3' ⇜ a4'
   ↖
b0 ⇜ b1 ⇜ b2

The second case is when a parent has a conflicting idea about what the
child chain is. Here our node receives block a5, which has a conflicting
idea of what child b2 is:

a0 ⇜ a1 ⇜ a2 ⇜ a3' ⇜ a4' ⇜ a5
   ↖                      ↖
b0 ⇜ b1 ⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜ b2'
        ⇜ b2 ⇜ X

As the parent always wins, even multiple blocks can get killed off this
way:


a0 ⇜ a1 ⇜ a2 ⇜ a3 ⇜ a4
   ↖
b0 ⇜ b1 ⇜ b2 ⇜ b3 ⇜ b4 ⇜ b5 ⇜ b6 ⇜ b7

to:

a0 ⇜ a1 ⇜ a2 ⇜ a3 ⇜ a4 ⇜ a5
   ↖                    ↖
b0 ⇜ b1 ⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜ b2'
        ⇜ b2 ⇜ b3 ⇜ b4 ⇜ b5 ⇜ X

This behavior is easier to understand if you say instead that the node
learned about block b2', which has more total work than b2: the sum
total of work done in the parent chain by blocks specifying that
particular child chain is considered before comparing the total work
done in only the child chain.
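
Or, as a tiny fork-choice sketch:

def child_tip_score(parent_work_endorsing, child_only_work):
    """Python compares tuples lexicographically, so any difference in
    parent-chain endorsement dominates child-only work."""
    return (parent_work_endorsing, child_only_work)

# b2' endorsed by two parent blocks beats a longer, unendorsed
# b2...b5 branch: child_tip_score(2, 1) > child_tip_score(0, 4)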

It's important to remember that the parent blockchain has and validates
both children's block headers; it is not possible to mine a block with
an invalid set of child headers. For instance with the following:

a0 ⇜ a1 ⇜ a2 ⇜ a3 ⇜ a4
   ↖    ↖    ↖
b0 ⇜ b1 ⇜ b2 ⇜ b3 ⇜ b4 ⇜ b5 ⇜ b6 ⇜ b7

I can't mine a block a5 that says the block following b2 is b2' in an
attempt to kill off b2 through b7.


Token transfer with tree-chains
===============================

How can we make use of this? Let's start with a simple discrete token
transfer system. Transactions are simply:

struct Transaction:
    uint256 prevTxHash
    script  prevPubKey
    script  scriptSig
    uint256 scriptPubKeyHash

We'll say transactions go in the tree-chain according to their
prevTxHash, with the depth in the tree equal to the depth of the
previous output. This means that you can prove an output was created by
the existence of that transaction in the block with prefix matching
H(tx.prevTxHash), and you can prove the transaction output is unspent by
the non-existence of a transaction in the block with prefix matching
H(tx).
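
Concretely, the placement rule might look like the following sketch
(helper names invented):

import hashlib

def H(data):
    return hashlib.sha256(data).digest()

def chain_for(hash_bytes, depth):
    """Bit-path naming the tree-chain responsible for a hash at a given
    depth, matching the radix-tree sharding above."""
    bits = ''.join(format(b, '08b') for b in hash_bytes)
    return bits[:depth]

# A tx belongs in chain_for(H(tx.prevTxHash), depth): its presence
# there proves the output was created. The absence of any tx in
# chain_for(H(tx), depth) proves that output is still unspent.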

With our above re-organization rule everything is consistent too: if
block b_i contains tx1, then the corresponding block c_j can contain a
valid tx2 spending tx1 provided that c_j depends on a_p and there is a
path from a_p to b_(i+k). Here's an example, starting with tx1 in c2:

b0 ⇜⇜⇜⇜⇜⇜ b1
         ↙
a0 ⇜ a1 ⇜ a2
   ↖
c0 ⇜ c1 ⇜ c2

Block b2 below can't yet contain tx2 because there is no path:

b0 ⇜⇜⇜⇜⇜⇜ b1 ⇜ b2
         ↙
a0 ⇜ a1 ⇜ a2
   ↖
c0 ⇜ c1 ⇜ c2

However now c3 is found, whose PoW solution was also valid for a3:

b0 ⇜⇜⇜⇜⇜⇜ b1 ⇜ b2
         ↙
a0 ⇜ a1 ⇜ a2 ⇜ a3
   ↖         ↖
c0 ⇜ c1 ⇜ c2 ⇜ c3

Now b3 can contain tx2, as b3 will also attempt to create a4, which
depends on a3:

b0 ⇜⇜⇜⇜⇜⇜ b1 ⇜ b2 ⇜ b3
         ↙
a0 ⇜ a1 ⇜ a2 ⇜ a3
   ↖         ↖
c0 ⇜ c1 ⇜ c2 ⇜ c3

Now that a3 exists, block c2 can only be killed if a3 is, which would
also kill b3 and thus destroy tx2.


Proving transaction output validity in a token transfer system
==============================================================

How cheap is it to prove the entire history of a token is valid from
genesis? Perhaps surprisingly, without any cryptographic moon-math the
cost is only linear!

Remember that a transaction in a given chain has committed to the chain
that it can be spent in. If Alice is to prove to Bob that the output she
gave him is valid, she simply needs to prove that for every transaction
in the history of the token the token was created, remained unspent,
then finally was spent. Proving a token remained unspent between blocks
b_n and b_m is trivially possible in linear size. Once the token is
spent nothing about blocks beyond b_m is required. Even if miners do not
validate transactions at all the proof size remains linear provided
blocks themselves have a maximum size - at worst the proof contains some
invalid transactions that can be shown to be false spends.

While certainly inconvenient, it is interesting how such a simple system
appears to in theory scale to unlimited numbers of transactions and with
an appropriate exchange rate move unlimited amounts of value. A possible
model would be for the tokens themselves to have power-of-two
values, and be split and combined as required.
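
E.g. a trivial sketch of that denomination scheme:

def split_into_powers_of_two(amount):
    """Any amount decomposes into O(log n) power-of-two tokens, which
    can then be split or combined as needed."""
    return [1 << bit for bit in range(amount.bit_length())
            if (amount >> bit) & 1]

# split_into_powers_of_two(13) -> [1, 4, 8]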


The lost data problem
=====================

There is however a catch: What happens when blocks get lost? Parent
blocks only contain their childrens' headers, not the block contents.
At some point the difficulty of producing a block will drop sufficiently
for malicious or accidental data loss to be possible. With the "parent
chain wins" rule it must be possible to recover from that event for
mining on the child to continue.

Concretely, suppose you have tx1 in block c2, which can be spent on
chain b. The contents of chain a is known to you, but the full contents
of chain b are unavailable:

b0 ⇜ b1       (b)  (b)
    ↙        ↙    ↙
a0 ⇜ a1 ⇜ a2 ⇜ a3 ⇜ a4 ⇜ a5
   ↖    ↖
c0 ⇜ c1 ⇜ c2 ⇜ c3 ⇜ c4 ⇜ c5

Blocks a3 and a4 are known to have children on b, but the contents of
those children are unavailable. We can define some ratio of unknown to
known blocks that must be proven for the proof to be valid. Here we
show a 1:1 ratio:

       ⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜⇜
b0 ⇜ b1       (b)  (b) b2 ⇜ b3 ⇜ b4 ⇜ b5 ⇜ b6 ⇜ b7
    ↙        ↙    ↙  ↙    ↙    ↙
a0 ⇜ a1 ⇜ a2 ⇜ a3 ⇜ a4 ⇜ a5 ⇜ a6 ⇜ a7 ⇜ a8 ⇜ a9
   ↖    ↖    ↖
c0 ⇜ c1 ⇜ c2 ⇜ c3 ⇜ c4 ⇜ c5 ⇜ c6 ⇜ c7 ⇜ c8 ⇜ c9


The proof now shows that while a3 and a4 have b-side blocks, by the
time you reach b6 those two lost blocks were in the minority. Of course
a real system needs to be careful that mining blocks and then discarding
them isn't a profitable way to create coins out of thin air - ratios
well in excess of 1:1 are likely to be required.


Challenge-response resolution
=============================

Another idea is to say that if the parent blockchain's contents are known
we can insert a challenge into it specifying that a particular child block
be published verbatim in the parent. Once the challenge is published,
further parent blocks may not reference children on that side until
either the desired block is re-published or some timeout is reached.
If the timeout is reached, mining backtracks to some previously known
child specified in the challenge. In the typical case the block is known
to a majority of miners, and is published, essentially allowing new
miners to force the existing ones to "cough up" blocks they aren't
publishing and allowing the new ones to continue mining. (obviously some
care needs to be taken with regard to incentives here)

While an attractive idea, this is our first foray into moon math.
Suppose such a challenge was issued in block a2, asking for the contents
of b1 to be published. Meanwhile tx1 is created in block c3, and can
only be spent on a b-side chain:

b0 ⇜ b1
    ↙
a0 ⇜ a1 ⇜ (a2) ⇜ a3
   ↖
c0 ⇜ c1 ⇜ c2 ⇜ c3

The miners of the b-chain can violate the protocol by mining a4/b1',
where b1' appears to contain valid transaction tx2:


b0 ⇜ b1          b1'
    ↙          ↙
a0 ⇜ a1 ⇜ (a2) ⇜ a3 ⇜ a4
   ↖
c0 ⇜ c1 ⇜ c2 ⇜ c3

A proof of tx2 as valid payment would entirely miss the fact that the
challenge was published and thus not know that b1' was invalid. While
I'm sure the reader can come up with all kinds of complex and fragile
ways of proving fraud to cause chain a to be somehow re-organized, what
we really want is some sub-linear proof of honest computation. Without
getting into details, this is probably possible via some flavor of
sub-linear moon-math proof-of-execution. But this paper is too long
already to start getting snarky.


Beyond token transfer systems
=============================

We can extend our simple one txin, one txout token transfer transactions
with merkle (sum) trees. Here's a rough sketch of the concept:

input #1─┐          ┌─output #1
         ├─┐      ┌─┤
input #2─┘ │      │ └─output #2
           ├──────┤
input #3─┐ │      │ ┌─output #3
         ├─┘      └─┤
input #4─┘          └─output #4

Where previously a transaction committed to a specific transaction
output, we can make our transactions commit to a merkle-sum-tree of
transaction outputs. To then redeem a transaction output you prove that
enough prior outputs were spent to add up to the new output's value. The
entire process can happen incrementally without any specific
co-operation between miners on different parts of the chain, and inputs
and outputs can come from any depth in the tree provided that care is
taken to ensure that reorganization is not profitable.
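
A minimal merkle-sum-tree sketch, with an invented serialization,
showing how every node commits to the sum of the values beneath it:

import hashlib

def leaf(value, script):
    h = hashlib.sha256(value.to_bytes(8, 'little') + script).digest()
    return (h, value)

def node(left, right):
    total = left[1] + right[1]  # each node commits to the subtree sum
    h = hashlib.sha256(left[0] + right[0] +
                       total.to_bytes(8, 'little')).digest()
    return (h, total)

# root = node(node(leaf(1, b'a'), leaf(2, b'b')),
#             node(leaf(3, b'c'), leaf(4, b'd')))  # root[1] == 10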

Like the token transfer system, proving a given output is valid has cost
linear with history. However we can improve on that using
non-interactive proof techniques. For instance in the linear token
transfer example the history only needs to be proven to a point where
the transaction fees are higher than the value of the output. (easiest
where the work required to spend a txout of a given value is well
defined) A similar approach can be easily taken with the
directed-acyclic-graph of multiple-input-output transactions. Secondly
non-interactive proof techniques can also be used, again out of the
scope of this already long preliminary paper.


1) "Disentangling Crypto-Coin Mining: Timestamping,
Proof-of-Publication, and Validation",
http://www.mail-archive.com/bitcoin-development%40lists.sourceforge.net/msg03307.html
--
'peter'[:-1]@petertodd.org
00000000000000002fd949770524eea54446adb70603a90a4c493d345f890e04
Gavin Andresen
2014-03-25 12:45:00 UTC
Post by Peter Todd
Bitcoin doesn't scale. There are a lot of issues at hand here, but the
most fundamental of them is that to create a block you need to update
the state of the UTXO set, and the way Bitcoin is designed means that
updating that state requires bandwidth equal to all the transaction
volume to keep up with the changes to that set. Long story short, we get
O(n^2) scaling, which is just plain infeasible.
We have a fundamental disagreement here.

If you go back and read Satoshi's original thoughts on scaling, it is clear
that he imagined tens of thousands of mining nodes and hundreds of millions
of lightweight SPV users.

Scaling is a problem if every person is a fully validating node; then,
indeed, you get an O(n^2) problem. Which can be solved by extending some
tentative trust to your peers, but lets put all those possible solutions
aside.

Given tens of thousands of fully validating nodes, you get O(m*n), where m
is the number of fully validating peers and is a large constant (10s of
thousands).

We don't know how large m can or will be; we have only just started to
scale up.

"Bitcoin doesn't scale" is pure FUD. It might not scale in exactly the way
you want, but it WILL scale.
--
Gavin Andresen
Chief Scientist, Bitcoin Foundation
https://www.bitcoinfoundation.org/
Peter Todd
2014-03-25 13:49:18 UTC
Post by Gavin Andresen
Post by Peter Todd
Bitcoin doesn't scale. There are a lot of issues at hand here, but the
most fundamental of them is that to create a block you need to update
the state of the UTXO set, and the way Bitcoin is designed means that
updating that state requires bandwidth equal to all the transaction
volume to keep up with the changes to that set. Long story short, we get
O(n^2) scaling, which is just plain infeasible.
We have a fundamental disagreement here.
If you go back and read Satoshi's original thoughts on scaling, it is clear
that he imagined tens of thousands of mining nodes and hundreds of millions
of lightweight SPV users.
Yeah, about that...

https://blockchain.info/pools

For someone with 'Chief Scientist' as their job title, I'm surprised you
think so little of hard evidence and so much of idol worshipping.


P.S. A year or so ago you complained that if I cared so much about
decentralization, I should make P2Pool better. Your homework: What do
tree-chains and Andrew Miller's non-outsourcable puzzles(1) have to do
with that? What about the cube-square law? And why don't I think TXO
commitments solve the blocksize problem?

1) https://bitcointalk.org/index.php?topic=309073.0;all
--
'peter'[:-1]@petertodd.org
000000000000000020366a15799010ae0432be831c197e06b19133028a9aa6f3
Mike Hearn
2014-03-25 15:20:05 UTC
A few months ago I had a conversation with an executive at a Bitcoin
company, and I suggested their developers should get involved with the
development list. I was told that they are all subscribed but refuse to
post. Puzzled, I asked why: maybe the process isn't clear, or we didn't talk
about what they were interested in? No, it's because, in that executive's
words, "They see how Peter Todd shoots people down in flames and want
nothing to do with that".

Peter, you were named explicitly as the source of the problem. Your
immediate knee-jerk reaction to anyone who disagrees with you is making
this forum aggressive and ugly - it puts other people off from
contributing. For what it's worth, if I were the moderator of this list I
would have banned you a long time ago because I value a friendly atmosphere
more than your "insights", which are often deeply suspect (as in this case).

Besides, ground up redesigns of Bitcoin like what you propose are more
appropriate for bitcointalk. So please take it there.
Peter Todd
2014-03-25 16:47:46 UTC
For the record, tree chains is designed to be a soft-fork upgrade to bitcoin, at least if we can get the economics to work out. Assuming it does, you would do this by defining bitcoin itself to be the top-level chain, and carrying what appear to be anyone-can-spend txouts from block to block so that transaction outputs can be created when funds are redeemed in the top blockchain from children lower in the tree. Very similar ideas as the chain-to-chain stuff via SPV proofs that Mark and Adam were talking about here earlier, although I think the ordering and reorganisation guarantees are a big advantage over their unsynched chain model. Most of the other ideas are identical, and they deserve credit.

I'm on the currency design panel at the Princeton Bitcoin Research Conference this week - tree-chains will be discussed informally if not on the panel itself.

Regarding cryptocurrency research related posts, the feedback I've gotten has always been quite positive. You are in the minority as far as I can tell, and anyway the volume of such posts is a small part of the total list volume.


As for the rest of your email, doctor, heal thyself. Gavin's constant branding of legitimate and well-accepted scaling concerns as FUD has irritated many people for over a year now, among many other things. Statements similar to what you claim are said about me are also often said to me about you and Gavin.

But anyway, reply off list please.
Post by Mike Hearn
A few months ago I had a conversation with an executive at a Bitcoin
company, and I suggested their developers should get involved with the
development list. I was told that they are all subscribed but refuse to
post. Puzzled, I asked why, maybe the process isn't clear or we didn't talk
about what they were interested in? No, it's because in that executive's
words "They see how Peter Todd shoots people down in flames and want
nothing to do with that".
Peter, you were named explicitly as the source of the problem. Your
immediate knee-jerk reaction to anyone who disagrees with you is making
this forum aggressive and ugly - it puts other people off from
contributing. For what it's worth, if I were the moderator of this list I
would have banned you a long time ago because I value a friendly atmosphere
more than your "insights", which are often deeply suspect (as in this case).
Besides, ground up redesigns of Bitcoin like what you propose are more
appropriate for bitcointalk. So please take it there.
Jeff Garzik
2014-03-25 17:37:00 UTC
Permalink
Post by Peter Todd
For someone with 'Chief Scientist' as their job title, I'm surprised you
think so little of hard evidence and so much of idol worshipping.
Peter, take this unprofessional, personal crap off-list.

Mike's anecdote of hostility is not an isolated one. Just today, a
bitcore developer commented on "Peter Todd's ..apocalyptic vision
and... negative view on bitcoin" which turned off some other
developers from participating more interactively.

As I commented on IRC, open source projects are no strangers to people
who simultaneously (a) make useful contributions and (b) turn
potential contributors away with an abrasive or hostile attitude
toward others. It's an unsolved problem in OSS, one I saw for 15+
years in the Linux kernel community.

For this list, as Mike suggested on IRC, introducing an openly stated
moderation policy may be the one route.
--
Jeff Garzik
Bitcoin core developer and open source evangelist
BitPay, Inc. https://bitpay.com/
Alan Reiner
2014-03-25 18:02:03 UTC
Permalink
I would echo the need for some kind of moderation.

I believe Peter Todd is an extremely intelligent individual, who has a
lot to offer the Bitcoin community. He has a firm grasp of a lot of
really deep Bitcoin concepts and his *technical* insight is generally
positive. Technically. But the way he communicates on this list is
*extremely* corrosive and breeds hostility. It makes it a scary place
to discuss things, with frequent, public ridicule of everything posted.

I agree that I would rather have a friendly environment to discuss
technicals, even if it means losing additional technical insight.
People who would explicitly insult other contributors' intelligence and
character on a public list should be subject to some kind of negative
reinforcement. Maybe there are solutions other than outright banning.

-Alan
Post by Jeff Garzik
Post by Peter Todd
For someone with 'Chief Scientist' as their job title, I'm surprised you
think so little of hard evidence and so much of idol worshipping.
Peter, take this unprofessional, personal crap off-list.
Mike's anecdote of hostility is not an isolated one. Just today, a
bitcore developer commented on "Peter Todd's ..apocalyptic vision
and... negative view on bitcoin" which turned off some other
developers from participating more interactively.
As I commented on IRC, open source projects are no strangers to people
who simultaneously (a) make useful contributions and (b) turn
potential contributors away with an abrasive or hostile attitude
toward others. It's an unsolved problem in OSS, one I saw for 15+
years in the Linux kernel community.
For this list, as Mike suggested on IRC, introducing an openly stated
moderation policy may be the one route.
slush
2014-03-25 18:13:36 UTC
Permalink
I fully agree, please keep a friendly environment on this list. Btw I also
met people who were making fun of Peter's reactions on bitcoin-dev.

slush
Post by Alan Reiner
I would echo the need for some kind of moderation.
I believe Peter Todd is an extremely intelligent individual, who has a
lot to offer the Bitcoin community. He has a firm grasp of a lot of
really deep Bitcoin concepts and his *technical* insight is generally
positive. Technically. But the way he communicates on this list is
*extremely* corrosive and breeds hostility. It makes it a scary place
to discuss things, with frequent, public ridicule of everything posted.
I agree that I would rather have a friendly environment to discuss
technicals, even if it means losing additional technical insight.
People who would explicitly insult other contributors' intelligence and
character on a public list should be subject to some kind of negative
reinforcement. Maybe there are solutions other than outright banning.
-Alan
Post by Jeff Garzik
Post by Peter Todd
For someone with 'Chief Scientist' as their job title, I'm surprised you
think so little of hard evidence and so much of idol worshipping.
Peter, take this unprofessional, personal crap off-list.
Mike's anecdote of hostility is not an isolated one. Just today, a
bitcore developer commented on "Peter Todd's ..apocalyptic vision
and... negative view on bitcoin" which turned off some other
developers from participating more interactively.
As I commented on IRC, open source projects are no strangers to people
who simultaneously (a) make useful contributions and (b) turn
potential contributors away with an abrasive or hostile attitude
toward others. It's an unsolved problem in OSS, one I saw for 15+
years in the Linux kernel community.
For this list, as Mike suggested on IRC, introducing an openly stated
moderation policy may be the one route.
Peter Todd
2014-03-25 19:47:15 UTC
Permalink
OK, deal. You guys stop calling my concerns FUD, accusing me of having ulterior motives, etc. and I'll pay the same respect to you.
Post by slush
I fully agree, please keep a friendly environment on this list. Btw I also
met people who were making fun of Peter's reactions on bitcoin-dev.
Troy Benjegerdes
2014-03-25 21:41:17 UTC
Permalink
Peter,

I think you and I both know there is WAAYY too much MONEY to be taken
from naive end-users by the companies that employ people who call
your concerns FUD.

And for everyone else, I want to apologize in advance for anything
I might happen to say that might be abrasive, arrogant, angry, or
'in need of moderation'. So for those who do not wish to hear or
read such things, delete my message now.

===================
disclaimer: strong language follows
===================





What the fuck Groupthink?
committee for GROUPTHINKPROFIT?

I'd rather have Peter Todd calling some developers idiots on the
list than some fucking idiots who get paid way too fucking much
calling 'end-users' stupid for believing MtGox. Hell, I was one
of those idiots who fell for a marketing scam by a company that
had a good story.


But here is the damn point. The executive who was whining about
how his devs won't show up should probably consider hiring people
who make VOCAL points on the mailing list. Or maybe he should
consider that his developers might know his business model is
shit, and that if they DID say something, it would be CLEAR to the
world that only an idiot would use their company's services, and
kill the company.

Would you rather hear of vulnerabilities and scaling limits on
bitcoin-development, or would you rather hear about them from a
chorus of "They got hacked, their code must suck", but AFTER
the fact?

It seems to be an unfortunate fact of life that sleazy people
take a shitload of money from nice people. Moderate Peter and
me into oblivion at your own risk. Wouldn't you rather have us
pointing out obvious flaws than ignoring shit?

... But just remember, your employers probably make more money
by ignoring shit....
Post by Peter Todd
OK, deal. You guys stop calling my concerns FUD, accusing me of having ulterior motives, etc. and I'll pay the same respect to you.
--
----------------------------------------------------------------------------
Troy Benjegerdes 'da hozer' ***@hozed.org
7 elements earth::water::air::fire::mind::spirit::soul grid.coop

Never pick a fight with someone who buys ink by the barrel,
nor try to buy a hacker who makes money by the megahash
Ricardo Filipe
2014-03-25 20:40:40 UTC
Permalink
Post by Peter Todd
Post by Gavin Andresen
Post by Peter Todd
Bitcoin doesn't scale. There's a lot of issues at hand here, but the
most fundamental of them is that to create a block you need to update
the state of the UTXO set, and the way Bitcoin is designed means that
updating that state requires bandwidth equal to all the transaction
volume to keep up with the changes to that set. Long story short, we get
O(n^2) scaling, which is just plain infeasible.
We have a fundamental disagreement here.
If you go back and read Satoshi's original thoughts on scaling, it is clear
that he imagined tens of thousands of mining nodes and hundreds of millions
of lightweight SPV users.
Yeah, about that...
https://blockchain.info/pools
On-topic:
This argument is quite the fallacy. The only reason we have so few
pools is that each of their miners doesn't find it feasible to mine
"on their own". If you count the individual miners in those pools you
will get to the scale Gavin was trying to point out.

Nevertheless I think that is just a minor disagreement, since tree-chains
help decentralization.
Post by Peter Todd
For someone with 'Chief Scientist' as their job title, I'm surprised you
think so little of hard evidence and so much of idol worshipping.
P.S. A year or so ago you complained that if I cared so much about
decentralization, I should make P2Pool better. Your homework: What do
tree-chains and Andrew Miller's non-outsourcable puzzles(1) have to do
with that? What about the cube-square law? And why don't I think TXO
commitments solve the blocksize problem?
1) https://bitcointalk.org/index.php?topic=309073.0;all
--
000000000000000020366a15799010ae0432be831c197e06b19133028a9aa6f3
Troy Benjegerdes
2014-03-25 22:00:02 UTC
Permalink
Post by Ricardo Filipe
Post by Peter Todd
Post by Gavin Andresen
Post by Peter Todd
Bitcoin doesn't scale. There's a lot of issues at hand here, but the
most fundamental of them is that to create a block you need to update
the state of the UTXO set, and the way Bitcoin is designed means that
updating that state requires bandwidth equal to all the transaction
volume to keep up with the changes to that set. Long story short, we get
O(n^2) scaling, which is just plain infeasible.
We have a fundamental disagreement here.
If you go back and read Satoshi's original thoughts on scaling, it is clear
that he imagined tens of thousands of mining nodes and hundreds of millions
of lightweight SPV users.
Yeah, about that...
https://blockchain.info/pools
This argument is quite the fallacy. The only reason we have that few
pools is because each of their miners doesn't find it feasible to mine
"on their own". if you count the individual miners on those pools you
will get to the scale Gavin was trying to point out.
Nevertheless i think that is just a minor disagreement, since tree
chains help decentralization.
I think it is actually a major fundamental disagreement, and opinions
tend to correlate strongly with salary considerations.

"It is difficult to get a man to understand something, when his salary
depends upon his not understanding it!" -- Upton Sinclair

Let us either agree to disagree, or get on with moderating this list
so that only sensible salaried discussions can take place.
Peter Todd
2014-03-26 10:58:02 UTC
Permalink
Post by Ricardo Filipe
Post by Peter Todd
Post by Gavin Andresen
Post by Peter Todd
Bitcoin doesn't scale. There's a lot of issues at hand here, but the
most fundamental of them is that to create a block you need to update
the state of the UTXO set, and the way Bitcoin is designed means that
updating that state requires bandwidth equal to all the transaction
volume to keep up with the changes to that set. Long story short, we get
O(n^2) scaling, which is just plain infeasible.
We have a fundamental disagreement here.
If you go back and read Satoshi's original thoughts on scaling, it is clear
that he imagined tens of thousands of mining nodes and hundreds of millions
of lightweight SPV users.
Yeah, about that...
https://blockchain.info/pools
This argument is quite the fallacy. The only reason we have that few
pools is because each of their miners doesn't find it feasible to mine
"on their own". if you count the individual miners on those pools you
will get to the scale Gavin was trying to point out.
Yeah, that's part of my fundamental disagreement with him: I draw a
sharp line between mining - the act of validating and constructing new
blocks - and hashing - the act of solving proof-of-work problems. The
latter definitely has incentives to decentralize due to simple physics:
it's cheaper per unit hashing power to get rid of a small amount of
waste heat than a large amount. The former requires a full node, and
that full node is a fixed cost overhead related to the number of
transactions per second. Any fixed cost overhead discourages
decentralization, and encourages centralization.
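
To put rough numbers on that fixed-cost asymmetry, a toy calculation;
all figures are invented, only the shape matters:

    # Made-up figures; only the shape of the curve matters.
    NODE_COST = 100.0      # $/month fixed cost of a validating full node
    MARGIN_PER_TH = 10.0   # $/month hashing margin per TH/s

    # The fixed node cost consumes all of a 10 TH/s miner's margin but
    # only 0.1% of a 10,000 TH/s miner's: a constant pull to centralize
    # mining (validation), even while hashing itself decentralizes.
    for th in (10, 100, 10_000):
        overhead = NODE_COST / (th * MARGIN_PER_TH)
        print(f"{th:6} TH/s: node overhead = {overhead:.1%} of margin")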
Post by Ricardo Filipe
Nevertheless i think that is just a minor disagreement, since tree
chains help decentralization.
Yup. Quite importantly, the model is for any one miner to be able to
fully participate at the same level as any other miner by mining some
section of the tree. As your reward is linked to blocks mined, there
will always be some level at which you are mining blocks at a reasonably
low variance and you don't need to join a pool to achieve that low
variance. Equally your resources to keep up with that part of the tree
can be made reasonably low, and that cost only grows with the log of the
total transaction volume.
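
A quick toy calculation of that, assuming hashing power spreads evenly
over the 2^d chains at depth d and each chain makes one block per 10
minutes (my own simplifications, not part of the design):

    BLOCKS_PER_DAY = 144

    def expected_blocks(share, depth):
        # A miner with fraction `share` of total hashrate, all pointed
        # at one depth-d chain, has share * 2**d of that chain's power.
        return min(1.0, share * 2**depth) * BLOCKS_PER_DAY

    def depth_needed(share, target_per_day=6):
        d = 0
        while expected_blocks(share, d) < target_per_day:
            d += 1
        return d

    # A 0.01% miner gets ~6 blocks/day around depth 9, with no pool,
    # and the state it must track grows with depth, i.e. log(volume).
    print(depth_needed(0.0001))  # -> 9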
--
'peter'[:-1]@petertodd.org
0000000000000000f4f5ba334791a4102917e4d3f22f6ad7f2c4f15d97307fe2
Peter Todd
2014-03-25 12:50:58 UTC
Permalink
Post by Peter Todd
Btw, any chance we could get a summary description of tree-chains
posted to bitcoin-development?
Introduction
============
BTW for those whose email clients have problems with unicode:

http://www.mail-archive.com/bitcoin-***@lists.sourceforge.net/msg04388.html

Also, I was in a bit of a rush - catching a flight - and I know I should
have cited a few things, including, but not limited to, various people's
work on chain-to-chain transfers and SPV proofs.
--
'peter'[:-1]@petertodd.org
00000000000000005f3189269d2c39711d6a340a617267d72f95848a9ab8e7ba
Mark Friedenbach
2014-03-25 21:03:57 UTC
Permalink
I'm afraid I'm going to be the jerk who requested more details and then
only nitpicks seemingly minor points in your introduction. But it's
because I need more time to digest the contents of your proposal. Until
then:
Post by Peter Todd
But moving value between chains is inconvenient; right now moving
value requires trusted third parties. Two-way atomic chain transfers
do help here, but as recent discussions on the topic showed there are
all sorts of edge cases with reorganizations that are tricky to
handle; at worst they could lead to inflation.
This isn't true. The re-org issue is fairly handled in the 2-way pegging
scheme that Greg Maxwell developed and Adam Back described a week ago on
this list. Depending on the implementation it could even be configurable
by the person performing the peg too - allowing the transfer to specify
the confirmation depth required during the quieting period in order to
protect against re-orgs up to a sufficient depth. I think this is worked
out quite well with sufficient enumeration of edge cases, and I don't
think they are particularly tricky to handle or lead to money-losing
situations under the explicit security assumptions.
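
A minimal sketch of that configurable rule as I read it, with invented
names rather than anything from the actual proposal:

    # A transfer specifies its own confirmation depth; funds unlock
    # only once the SPV proof is buried that deep and no reorg proof
    # has contested it. Illustrative model only.
    from dataclasses import dataclass

    @dataclass
    class PegTransfer:
        proof_height: int        # height where the SPV proof was accepted
        required_depth: int      # chosen by the person performing the peg
        contested: bool = False  # set if a reorg proof was presented

    def spendable(t, current_height):
        confirmations = current_height - t.proof_height
        return not t.contested and confirmations >= t.required_depth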

More importantly, to your last point there is absolutely no way this
scheme can lead to inflation. The worst that could happen is theft of
coins willingly put into the pegging pool. But in no way is it possible
to inflate the coin supply.

I will look at your proposal in more depth. But I also think you should
give 2-way pegging a fair shake as pegging to side chains and private
accounting servers may eliminate the need.
Gregory Maxwell
2014-03-25 22:34:31 UTC
Permalink
Post by Mark Friedenbach
More importantly, to your last point there is absolutely no way this
scheme can lead to inflation. The worst that could happen is theft of
coins willingly put into the pegging pool. But in no way is it possible
to inflate the coin supply.
I don't think it would be entirely unfair to describe one of the
possible ways a secondary coin becoming unbacked can play out as
inflation— after all, people have described altcoins as inflation. In
the worst case it's no _worse_ inflation, I think, than an altcoin is—
however.
Post by Mark Friedenbach
I will look at your proposal in more depth. But I also think you should
give 2-way pegging a fair shake as pegging to side chains and private
accounting servers may eliminate the need.
I think that chain geometries which improve the scale/decentralization
trade-off are complementary. If PT's ideas here do amount to something
that gives better scaling without ugly compromise I believe it would
still be useful no matter how well the 2-way peg stuff works simply
because scaling and decentralization are both good things which we
would pretty much always want more of...
Jorge Timón
2014-03-27 16:14:04 UTC
Permalink
I'll make sure I understand your proposal better before commenting
much on it, but at first glance, I don't see how it is incompatible
with 2-way peg and merged mining itself.
Why wouldn't you want merged mining for the root of your tree?
A miner could only choose one leaf block at a time, but it could
merge-mine with other leaves in other independent trees.
Anyway, I'll better comment on the 2 way peg and merged mining issues
raised so far.
Post by Gregory Maxwell
Post by Mark Friedenbach
More importantly, to your last point there is absolutely no way this
scheme can lead to inflation. The worst that could happen is theft of
coins willingly put into the pegging pool. But in no way is it possible
to inflate the coin supply.
I don't think it would be entirely unfair to describe one of the
possible ways a secondary coin becoming unbacked can play out as
inflation-- after all, people have described altcoins as inflation. In
the worst case it's no _worse_ inflation, I think, than an altcoin is--
however.
I think that's an obscure corner case that is not likely ever going to
be implemented.
If you produce real inflation there will likely be a "bank run".
If you're going to implement something equivalent to demurrage you
should call it demurrage instead of inflation.
And that's only for the pegged coin in the side chain: BITCOINS IN THE
MAIN CHAIN WILL NEVER BE INFLATED USING THE 2-WAY PEG.

So I think it's less confusing if we just say that 2-way peg can't
produce inflation in general, and leave "unless you explicitly
introduce an inflation mechanism" as a probably unnecessary
clarification.
Post by Peter Todd
I see your point, but gmaxwell accurately guesses below that when I'm
talking about inflation, I'm including the inflation of the alt too.
You don't need inflation on the side chain. You don't need to create
another currency to create another chain with different and maybe
experimental features, that's the whole point.

With merged mining, you're adding up the different created seigniorage
subsidies to the same fire to share the heat.
With 2-way peg, you don't even need to create a new p2p currency with
a seigniorage to burn on hashes or be accused of "pre-mining" as the
more ecological alternative in existence.
Your chain can secure itself on fees, just like bitcoin in the future.
Merged mining will help, but it's not the panacea and you will need to
reward miners because that's what your security ultimately depends on.
This is mostly about not burning the world; it may not be as
interesting to you as improving bitcoin's scalability, but you're not
doing anyone a favor by presenting both concepts as being
incompatible, not even yourself.
Post by Peter Todd
With tree-chains that's particularly obvious as the scheme doesn't try
to privilege one chain over another beyond parent-child relationships.
If I understand it correctly, all the utxo nodes in the tree implement
the same rules, so it doesn't seem suitable to solve the same problems.
I understand that merged mining IS NOT a solution to scalability on
its own; having 10 independent 1MB blocks is no worse than 1 10MB
block in terms of performance vs centralization.
But maybe it's possible to have a 10 GB sharded side-chain (your
proposal) that is merge-mined with the main chain and from which the
currency of the side-chain comes.
So merged mining could help solve the scalability problem indirectly.
And 2-way peg could be a useful previous step for your proposal to be
deployed "on production", with real bitcoins without forcing all
bitcoin users to take the associated risks, only the people who opt
in.
Post by Peter Todd
Incidentally, I understand that the pegged chains are meant to be
merge-mined.
2 way peg doesn't require merged mining but it is assumed that merged
mining will be used since it provides more security than independent
mining.
I thought you agreed with this and your claim was just that merged
mining is less secure than "embedded consensus", something I have
never denied; my complaint against "embedded consensus" is that it
doesn't seem to scale (with Bitcoin as it is today) and can't offer
many features that a hardfork merged-mined chain could offer (like
those explained in our freimarkets proposal).
But since you're implying again that "merged mining is superior to
independent mining" is generally false, I invite you again to
dismantle my example

http://sourceforge.net/p/bitcoin/mailman/message/31806950/

or to prove your hypothesis that it "is free to attack merged mining
chains" by attacking namecoin for free. Either one will serve, but
you're not responding to any of the suggestions.
Instead, you're saying that "people defending merged mining assume
that attackers are economically rational". I think you're referring to
me and it's false.
Of course the attacker doesn't need to be economically rational. For
some unknown reason he's attacking a chain; without questioning the
rationality of the attack, I just sum costs, including opportunity
costs, because costs are all that proof-of-work security is about.
Please, do one of the two before continuing your merged mining
defamation campaign.
Post by Peter Todd
merge-mined. To me this seems problematic and cheap to attack. Consider
a merge-mined zerocoin sidechain: Can you profit from depositing some
coins, taking them out again, then reorging the zerocoin chain to undo
that withdrawal on the zerocoin side, and performing it all over again?
That's what the quieting periods are for.
After the withdrawal, the coins are blocked until they reach maturity or
someone else provides a reorg proof invalidating the withdrawal.
Post by Peter Todd
It'd be easy to drain the pegging pool that way, and with merge-mining
there's no inherent cost to you to do so. Not unique to zerocoin either
of course, just in that case who actually double-spent is unknowable.
We could talk about this in the 2-way peg thread, but anyway...
Let's say 80% of bitcoin miners also mine Zerocoin.
Let's say zerocoin's reward ZR is 1% of Bitcoin's Reward BR
Let's say "megahash" hashes 40% of Bitcoin's mining and is our attacker.
Previously megahash rented its hardware for 0.41 BR for each GHash/s,
because that was the market price at the time.
Now it will mine bitcoin and attack zerocoin, so it will recover 0.4
BR, leaving the costs of the attack at 0.01 BR per GHash/s (assuming
it doesn't rent additional hardware, which it could also do).
Since it controls 40% of Bitcoin's hashing, it controls approximately
50% of zerocoin's hashing.
So megahash tries to withdraw AR coins (attack reward) and then double spend.
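
Spelling out the arithmetic so far in a few lines of Python (BR
normalized to 1; everything per GHash/s):

    # BR normalized to 1; all quantities per GHash/s of megahash's power.
    BR = 1.0
    market_rent = 0.41 * BR    # what renting the hardware used to earn
    btc_only    = 0.40 * BR    # earned mining bitcoin without zerocoin
    attack_cost = market_rent - btc_only   # = 0.01 BR, opportunity cost

    # 80% of bitcoin's hashpower merge-mines zerocoin, so megahash's 40%
    # of bitcoin is 0.40 / 0.80 = 50% of zerocoin's hashpower.
    zerocoin_share = 0.40 / 0.80
    print(attack_cost, zerocoin_share)   # -> ~0.01 BR/GHash/s, 0.5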
Troy Benjegerdes
2014-03-28 15:10:30 UTC
Permalink
Post by Jorge Timón
Anyway the particular situation in which a single entity controls 40%
of the hashing power should be rare. That's potentially dangerous for
bitcoin and although changing the hashing algorithm would be painful
and risky, I would be terribly scared of that happening if I were that
entity. Letting my percentage of hash rate dilute as others grow would
definitely be part of my plan.
I think *your* plan is an ecologically and socially rational plan. My
observations of irrational responses on this list lead me to believe
there is a single entity (which may be a cartel) which *effectively*
controls between 30% and 50% of the sha-256 hashing power and is quite
terrified of any alternative, and attempts to purchase, consume, or
eliminate any entities that might dilute its controlled hash rate or
pose a risk of switching to a new algorithm.

We must have a system in which 1 to 10% of the hashrate can provide a
reasonable check-and-balance and competitive pressure to 90% of the
hash rate, or it's going to be fundamentally unstable, and we will
just re-create 'too big to fail' all over again.
Post by Jorge Timón
Although this is again completely orthogonal to the merged mining or
not discussion, hashing algorithms are often mixed into the arguments
against merged mining. If you had to introduce that hashing algorithm
hardfork change you would probably choose something with properties
similar to those of SHA256, like being easy to implement specialized
hardware for. You could even choose a memory-hard algorithm if you
want to promote ASIC production centralization, but you can't choose
an "anti-ASIC" algorithm because those don't exist.
It is well known that any information machine that can be built with
software can also be built with specialized hardware and vice versa.
Sadly that kind of fallacy is often used to justify the ecological
crime that starting a new chain with no plans of doing merged mining
represents.
You speak of ecological crime without proposing any mechanism in which
the ecologically correct thing is also the economically rational thing.

If I could get real-time MISO market pricing for wind energy, I could
run a mining farm on my farm.

I would like to propose we collaborate on developing secure mechanism
to audit energy sources for miners on a new chain called 'Ecocoin' in
which the block reward is proportional to how much energy the owner
of the newly generated block reward personally harvested from renewable
sources.

The reward curve will have to be calibrated and adjusted to minimize
the overall costs and fraud risk of auditing the energy input sources.
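
As a strawman of that reward rule, with every number a placeholder to
be calibrated:

    # Strawman reward rule; every number is a placeholder to calibrate.
    BASE_REWARD = 50.0   # coins per block
    CAP_KWH = 1000.0     # audited renewable kWh needed for full reward

    def block_reward(audited_renewable_kwh):
        fraction = min(audited_renewable_kwh, CAP_KWH) / CAP_KWH
        return BASE_REWARD * fraction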
--
----------------------------------------------------------------------------
Troy Benjegerdes 'da hozer' ***@hozed.org
7 elements earth::water::air::fire::mind::spirit::soul grid.coop

Never pick a fight with someone who buys ink by the barrel,
nor try to buy a hacker who makes money by the megahash
Tier Nolan
2014-04-17 21:41:55 UTC
Permalink
How does this system handle problems with the lower chains after they have
been "locked-in"?

The rule is that if a block in the child chain is pointed to by its parent,
then it effectively has infinite POW?

The point of the system is that a node monitoring the parent chain only has
to watch the header chain for its 2 children.

A parent block header could point to an invalid block in one of the child
chains. That parent block could end up built on top of before the problem
was discovered.

This would mean that a child chain problem could cause a roll-back of a
parent chain. This violates the principle that parents are dominant over
child chains.

Alternatively, the child chain could discard the infinite POW blocks, since
they are illegal.

P1 -> C1
P2 -> ---
P3 -> C3
P4 -> C5

It turns out C4 (or C5) was an invalid block

P5 -> C4'
P6 -> ---
P7 -> C8'

This is a valid sequence. Once P7 points at C8', the alternative chain
displaces C5.

This displacement could require a compact fraud proof to show that C4 was
an illegal block and that C5 was built on it.

This shouldn't happen if the miner was actually watching the log(N) chains,
but it can't be guaranteed against.
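
To make the displacement rule concrete, a toy model in Python; the
structures are mine, not from the proposal:

    # Toy model of the example above; structures are illustrative only.
    parent_pointers = {         # parent height -> endorsed child block
        1: "C1", 3: "C3", 4: "C5",   # C5 was built on the invalid C4
        5: "C4'", 7: "C8'",          # replacement after the fraud proof
    }

    def best_child_tip(fraud_proven):
        # A parent endorsement normally wins, but an endorsement of a
        # block shown invalid by a compact fraud proof is ignored, so a
        # later pointer can displace it without any parent-chain reorg.
        tip = None
        for height in sorted(parent_pointers):
            child = parent_pointers[height]
            if child not in fraud_proven:
                tip = child
        return tip

    print(best_child_tip({"C5"}))   # -> C8'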

I wonder if the proof of stake "nothing is at stake" principle applies
here. Miners aren't putting anything at stake by merge mining the lower
chains.

At minimum, they should get tx-fees for the lower chains that they merge
mine. The rule could require that the minting reward is divided over the
merge-mined chains.
Gregory Sanders
2014-08-03 17:23:07 UTC
Permalink
Peter, I was curious if you could detail what specific concerns Adam Back
brought up with the current iteration of the tree-chains idea? It's been
alluded to a few times, yet I have not read the specific problem.

Greg

Luke-Jr
2014-03-24 21:17:13 UTC
Permalink
Post by Peter Todd
To make a long story short, it was soon suggested that Bitcoin Core be
forked - the software, not the protocol - and miners encouraged to
support it.
There's been at least one public miner-oriented fork of Bitcoin Core since 0.7
or earlier. Miners still running vanilla Bitcoin Core are neglecting their
duty to the community. That being said, the more forks, the better for
decentralisation.

Luke
Peter Todd
2014-03-26 10:48:52 UTC
Permalink
Post by Mark Friedenbach
But moving value between chains is inconvenient; right now moving
value requires trusted third parties. Two-way atomic chain transfers
do help here, but as recent discussions on the topic showed there are
all sorts of edge cases with reorganizations that are tricky to
handle; at worst they could lead to inflation.
This isn't true. The re-org issue is fairly handled in the 2-way pegging
scheme that Greg Maxwell developed and Adam Back described a week ago on
this list. Depending on the implementation it could even be configurable
by the person performing the peg too - allowing the transfer to specify
the confirmation depth required during the quieting period in order to
protect against re-orgs up to a sufficient depth. I think this is worked
out quite well with sufficient enumeration of edge cases, and I don't
think they are particularly tricky to handle or lead to money-losing
situations under the explicit security assumptions.
More importantly, to your last point there is absolutely no way this
scheme can lead to inflation. The worst that could happen is theft of
coins willingly put into the pegging pool. But in no way is it possible
to inflate the coin supply.
I see your point, but gmaxwell accurately guesses below that when I'm
talking about inflation, I'm including the inflation of the alt too.
With tree-chains that's particularly obvious as the scheme doesn't try
to privilege one chain over another beyond parent-child relationships.


Incidentally, I understand that the pegged chains are meant to be
merge-mined. To me this seems problematic and cheap to attack. Consider
a merge-mined zerocoin sidechain: Can you profit from depositing some
coins, taking them out again, then reorging the zerocoin chain to undo
that withdrawal on the zerocoin side, and performing it all over again?
It'd be easy to drain the pegging pool that way, and with merge-mining
there's no inherent cost to you to do so. Not unique to zerocoin either
of course, just in that case who actually double-spent is unknowable.
Post by Mark Friedenbach
I will look at your proposal in more depth. But I also think you should
give 2-way pegging a fair shake as pegging to side chains and private
accounting servers may eliminate the need.
Well I'll certainly raid 2-way pegging for ideas. :) I think the big
difference between the two is how I'd like to see tree-chains reduce
dependence on miner validation - ideally miners wouldn't validate at all
if the efficiency can be regained with ZK-SNARKS or something. Dropping
validation from mining could also avoid the problem of how in Bitcoin
there is no explicit mechanism that actually forces miners to validate
the chain. Not unlike gmaxwell's "firedrill" ideas, you would be able to
"firedrill" clients at any point by just mining some invalid garage.

(not to say miners would certainly not do validation - you still want to
be able to pay them transaction fees, but in that case they're doing the
validation only for themselves)
Post by Mark Friedenbach
More importantly, to your last point there is absolutely no way this
scheme can lead to inflation. The worst that could happen is theft of
coins willingly put into the pegging pool. But in no way is it possible
to inflate the coin supply.
I don't think it would be entirely unfair to describe one of the
possible ways a secondary coin becoming unbacked can play out as
inflation— after all, people have described altcoins as inflation. In
the worst case it's no _worse_ inflation, I think, than an altcoin is—
however.
Yup, and in the tree-chains model, every single chain is, from that
perspective, an altcoin.
--
'peter'[:-1]@petertodd.org
0000000000000000f4f5ba334791a4102917e4d3f22f6ad7f2c4f15d97307fe2