Discussion:
[bitcoin-dev] Scaling by Partitioning
Akiva Lichtner via bitcoin-dev
2015-12-08 16:27:18 UTC
Hello,

I am seeking some expert feedback on an idea for scaling Bitcoin. As a
brief introduction: I work in the payment industry and I have twenty years'
experience in development. I have some experience with process groups and
ordering protocols too. I think I understand Satoshi's paper but I admit I
have not read the source code.

The idea is to run more than one simultaneous chain, each chain defeating
double spending on only part of the coin. The coin would be partitioned by
radix (or modulus, not sure what to call it). For example, in order to
multiply throughput by a factor of ten you could run ten parallel chains:
one would work on coin that ends in "0", one on coin that ends in "1", and
so on up to "9".
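
As a minimal sketch, assuming a hypothetical integer coin identifier (the
post leaves the coin's actual representation open), the partition rule
might look like this in Python:

    # Illustrative only: Bitcoin has no native "coin ID", so an integer
    # identifier per coin is assumed here. The partition is simply the
    # identifier's value modulo the number of chains (its last digit).

    NUM_CHAINS = 10

    def partition_of(coin_id: int, num_chains: int = NUM_CHAINS) -> int:
        """Map a coin to the chain responsible for tracking its spends."""
        return coin_id % num_chains

    # Example: coin 1234567 ends in "7", so chain 7 defeats double
    # spends of it.
    assert partition_of(1234567) == 7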

The number of chains could increase automatically over time based on the
moving average of transaction volume.
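
A hedged sketch of such an adjustment rule (the per-chain capacity figure
and the rounding policy are assumptions; the post does not specify them):

    import math

    PER_CHAIN_TPS = 7  # assumed throughput of one chain, for illustration

    def chains_needed(recent_tps: list[float]) -> int:
        """Derive the chain count from a moving average of tx volume."""
        moving_avg = sum(recent_tps) / len(recent_tps)
        return max(1, math.ceil(moving_avg / PER_CHAIN_TPS))

    # A sustained average of ~70 tx/s would call for ten chains.
    assert chains_needed([60.0, 70.0, 80.0]) == 10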

Blocks would have to contain the number of the partition they belong to,
and miners would have to round-robin through partitions so that an attacker
would not have an unfair advantage working on just one partition.
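
One way to read the round-robin requirement, as a sketch (the scheduling
function is hypothetical, not part of the proposal's text):

    from itertools import cycle

    def round_robin_schedule(num_chains: int, attempts: int) -> list[int]:
        """Partitions a compliant miner works on, in order, so that no
        single partition gets a disproportionate share of hash power."""
        schedule = cycle(range(num_chains))
        return [next(schedule) for _ in range(attempts)]

    # With ten chains, the eleventh attempt is back on partition 0.
    assert round_robin_schedule(10, 11) == [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0]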

I don't think there is much impact on miners, but clients would have to
send more than one message in order to spend money. Client messages will
need to enumerate coin using some sort of compression, to save space. This
seems okay to me, since client software often has to break things up into
equal parts anyway (e.g. memory pages, file system blocks), and the client
software could hide the details.
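
A sketch of the client-side split (coin IDs and the grouping rule are the
same illustrative assumptions as above):

    from collections import defaultdict

    def split_spend(coin_ids: list[int],
                    num_chains: int = 10) -> dict[int, list[int]]:
        """Group the coins being spent into one message per partition."""
        messages: dict[int, list[int]] = defaultdict(list)
        for coin in coin_ids:
            messages[coin % num_chains].append(coin)
        return dict(messages)

    # Spending coins 21, 305 and 41 takes two messages: one to chain 1
    # and one to chain 5.
    assert split_spend([21, 305, 41]) == {1: [21, 41], 5: [305]}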

Best wishes for continued success to the project.

Regards,
Akiva

P.S. I found a funny anagram for SATOSHI NAKAMOTO: "NSA IS OOOK AT MATH"
Bryan Bishop via bitcoin-dev
2015-12-08 16:45:32 UTC
On Tue, Dec 8, 2015 at 10:27 AM, Akiva Lichtner via bitcoin-dev
Post by Akiva Lichtner via bitcoin-dev
and miners would have to round-robin through partitions
At first glance this proposal seems most similar to the sharding proposals:

http://diyhpl.us/wiki/transcripts/scalingbitcoin/sharding-the-blockchain/
https://github.com/vbuterin/scalability_paper/blob/master/scalability.pdf
https://www.reddit.com/r/Bitcoin/comments/3u1m36/why_arent_we_as_a_community_talking_about/cxbamhn
http://eprint.iacr.org/2015/1168.pdf (committee approach)
Post by Akiva Lichtner via bitcoin-dev
but clients would have to send more than one message in order to spend money
Offloading work to the client for spends has in the past been a
well-received concept, such as the linearized coin history idea.

- Bryan
http://heybryan.org/
1 512 203 0507
Akiva Lichtner via bitcoin-dev
2015-12-08 18:30:01 UTC
Thanks for your response and links.

I think the difference is that those proposals all shard the mining work,
whereas what I wrote in my post shards the coin itself. In other words,
different parts of the coin space are forever segregated, never ending up
in the same block. It's a big conceptual difference, because I could spend
money and a fraction of it might make it into a block in ten minutes and
the rest two hours later.

And I think that's where the potential for the scalability comes in. I am
not really scaling Bitcoin's present requirements, I am relaxing the
requirements in a way that leaves the users and the miners happy, but that
could provoke resistance from the part of all of us that doesn't want
digital cash as much as it wants to make history.

All the best,
Akiva

P.S. Thanks for pointing out that I hit "reply" instead of "reply all"
earlier ...
Patrick Strateman via bitcoin-dev
2015-12-08 20:50:08 UTC
Payment recipients would need to operate a daemon for each chain, thus
guaranteeing no scaling advantage.

(There are other issues, but I believe that to be enough of a showstopper
not to continue.)
Akiva Lichtner via bitcoin-dev
2015-12-08 21:23:12 UTC
It's true that miners would have to be prepared to work on any partition. I
don't see how the partition number affects defeating double spending; what
matters is the nonce in the block, which keeps the identity of the next
successful miner random.

I expect that the number of miners would be ten times larger as well, so an
attacker would have no advantage working on one partition.

Patrick Strateman via bitcoin-dev
2015-12-08 21:29:13 UTC
If the partition is selected from a random key (the hash of the output, for
example) then payment recipients would need to operate a full node on
each of the chains.

What's the point of partitioning if virtually everybody needs to operate
each partition?

The mining aspect has its own set of issues, but I'm not going to get
into those.
Akiva Lichtner via bitcoin-dev
2015-12-08 21:41:07 UTC
If the system is modified to scale up, that means the number of transactions
is going up. That means the number of miners can also go up, and so will
the number of malicious nodes; at least this seems reasonable. The problem
with partitions is that an attacker can focus on one partition. However,
because the number of miners also increases, any attacks will fail as long
as the miners are willing to work on any partition, which is easily
accomplished by round-robin.

Since there are N times more miners, each miner still does the same amount
of work. The system scales by partitioning the money supply and increasing
the number of miners.



Loi Luu via bitcoin-dev
2015-12-09 06:30:24 UTC
Dear Akiva,

It's Loi Luu, one of the authors of the SCP protocol
(http://eprint.iacr.org/2015/1168.pdf).

Before SCP, we had been thinking hard about how to do sharding efficiently
without degrading any security guarantee. A simple solution which splits
the coins, or TXs, into several partitions will just not work. You have to
answer more questions to have a good solution. For example, I wonder in
your proposal, if a transaction spends a "coin" that ends in "1" and
creates a new coin that ends in "1", which partition should process the
transaction? What is the prior data needed to validate that kind of TX?

The problem we see with other proposals, and probably yours as well, is
that the amount of data that you need to broadcast immediately to the
network increases linearly with the number of TXs that the network can
process. Thus, sharding does not bring any advantage over simply using
other techniques to publish more blocks in one epoch (like Bitcoin-NG,
GHOST). The whole point of using sharding/partitioning is to localize
the bandwidth used, and to broadcast only minimal data to the network.
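
A back-of-envelope illustration of this objection (all figures are
assumptions, purely for scale):

    TX_SIZE_BYTES = 250   # assumed average transaction size
    NUM_SHARDS = 10
    TPS = 7 * NUM_SHARDS  # ten shards, each at a Bitcoin-like ~7 tx/s

    # If every transaction is still broadcast to every node, per-node
    # bandwidth grows linearly with total throughput...
    naive_bw = TPS * TX_SIZE_BYTES
    # ...whereas localizing traffic to shards divides it by NUM_SHARDS.
    localized_bw = naive_bw / NUM_SHARDS

    print(f"global broadcast: {naive_bw} B/s per node; "
          f"localized: {localized_bw:.0f} B/s per node")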

Clearly we are able to localize the bandwidth used with our SCP protocol.
The cost is that now recipients need to verify for themselves whether a
transaction is a double spend. However, we think that this is a reasonable
tradeoff, given the potential scalability that SCP can provide.

Thanks,
Loi Luu.

Akiva Lichtner via bitcoin-dev
2015-12-09 18:26:06 UTC
Thanks for giving serious consideration to my post.

With regard to your question "if a transaction spends a "coin" that
ends in "1" and creates a new coin that ends in "1", which partition
should process the transaction?", I would answer that only one
partition is involved. In other words, there are N independent block
chains that never cross paths.

With regard to your question "what is the prior data needed to
validate that kind of TXs?", I do not understand what this means. If
you can dumb it down a bit that would be good, because there could be
some interesting concern in this question.

Since partitions are completely segregated, there is no need for a
node to work on multiple partitions simultaneously. For attacks to be
defeated, a node needs to be able to work on multiple partitions in
turn, not at the same time. The reason is that if the computing
power of the good-faith nodes is unbalanced, this gives attackers an
unfair advantage.

Loi Luu via bitcoin-dev
2015-12-09 21:16:00 UTC
I guess the most basic question is how do you define a coin here?

Thanks,
Loi Luu
Akiva Lichtner via bitcoin-dev
2015-12-10 04:04:17 UTC
It's an interval (a, b) where a and b are between 0 and 21*10^6*10^8 (the
total supply expressed in satoshis).

Somebody pointed out that this is not easily accomplished using the current
code because there are no coin ids.
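
As a sketch of this answer (the partition rule layered on top is my
assumption, keyed here off the interval's lower endpoint):

    TOTAL_SATOSHIS = 21_000_000 * 100_000_000  # 21*10^6 * 10^8

    def make_coin(a: int, b: int) -> tuple[int, int]:
        """A coin is an interval (a, b) on the satoshi number line."""
        assert 0 <= a < b <= TOTAL_SATOSHIS
        return (a, b)

    def partition_of_coin(coin: tuple[int, int],
                          num_chains: int = 10) -> int:
        # Assumed rule: the lower endpoint's last digit picks the chain.
        return coin[0] % num_chains

    c = make_coin(1_000_000_007, 1_000_000_107)  # a 100-satoshi coin
    assert partition_of_coin(c) == 7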
Andrew via bitcoin-dev
2015-12-09 22:35:07 UTC
Hi Akiva

I sketched out a similar proposal here:
https://bitcointalk.org/index.php?topic=1083345.0

It's good to see people talking about this :). I'm not quite convinced by
segregated witness, as it might mess up some things, but I will take a
closer look.
Akiva Lichtner via bitcoin-dev
2015-12-10 03:58:10 UTC
Hi Andrew,

What you suggested is much more sophisticated than what I suggested. I am
strictly talking about independent chains - that's all.

I am struck by the fact that the topic of "scaling bitcoin" seems to be a
mix of different problems, and when people are really trying to solve
different problems, or arguing about applying the same solution in
different settings, it's easy to argue back and forth endlessly. Your post
talks about validating transactions without seeing all transactions. This
is a different problem than what I am addressing. My view of Bitcoin is
colored by my experience with process groups and total ordering. I view
Bitcoin as a timestamp service on all transactions, a total order. A total
order is difficult to scale. Period.

I am just addressing how to change the system so that blocks can be
generated faster if a) the transaction volume increases and b) you are
willing to have more miners. But you are also talking about transaction
verification and making sure that you don't need to verify everything. I
think these are two problems that should have two different names.

Correct me if I am wrong, but the dream of a virtual currency where
everybody is equal and runs the client on their mobile device went out the
window long ago. I think that went out with the special mining hardware. If
my organization had to accept bitcoin payments, I would assume that we'd
need a small server farm for transaction verification, and that we would
see all the transactions. Figure 10,000 transactions per second, like Visa.
As far as small organizations or private individuals are concerned, I think
it would be entirely okay for a guy on a smartphone to delegate
verification to a trusted party, as long as the trust chain stops there and
there is plenty of choice.

I am guessing the trustless virtual currency police would get pretty upset
about such a pragmatic approach, but it's not really a choice; the failure
to scale has already occurred. All things considered, I think that Bitcoin
will only scale when pragmatic considerations take center stage and the
academic goals take a lower priority. I would think companies could make a
good living out of running trusted verification services.

Once again, it doesn't mean that there is a bank; the network still allows
malicious nodes, but there can be pockets of trust. This is only natural:
most people trust at least one other person, so it's not that weird.

Akiva
Bryan Bishop via bitcoin-dev
2015-12-10 04:31:42 UTC
Post by Akiva Lichtner via bitcoin-dev
Correct me if I am wrong, but the dream of a virtual currency where
everybody is equal and runs the client on their mobile device went out the
window long ago. I think that went out with the special mining hardware. If
Mining equipment isn't for transaction verification. The mining
equipment is used to work on Proof-of-Work. Thanks.
Post by Akiva Lichtner via bitcoin-dev
my organization had to accept bitcoin payments, I would assume that we'd
need a small server farm for transaction verification, and that we would see
Unfortunately Bitcoin does not work like those centralized systems; it
should not be surprising that a system focused so much on
decentralized and independent verification would have developers
working on so many non-bandwidth scaling solutions. These other
proposals seek to preserve existing properties of Bitcoin (such as
cheap verification, low-bandwidth) while also increasing the amount of
activity that can enjoy the decentralized fruits of Proof-of-Work
labor. But it is not helpful to assume this can only look like Visa or
any database on a cluster.
Post by Akiva Lichtner via bitcoin-dev
would be entirely okay for a guy on a smartphone to delegate verification to
a trusted party, as long as the trust chain stops there and there is plenty
of choice.
I don't suppose I could tempt you with probabilistically checkable
proofs, could I? These verify in milliseconds and grow sublinearly in the
size of the total data, but no near-term proposal is available yet.
Post by Akiva Lichtner via bitcoin-dev
I am guessing the trustless virtual currency police would get pretty upset
about such a pragmatic approach, but it's not really a choice; the failure
to scale has already occurred. All things considered, I think that Bitcoin
Perhaps instead of failure-to-scale you mean that applying traditional
scaling techniques has already failed, which shouldn't be so surprising
given the different security model that Bitcoin operates on.
Post by Akiva Lichtner via bitcoin-dev
most people trust at least one other person, so it's not that weird.
see the following recent text,
"""
Bitcoin is P2P electronic cash that is valuable over legacy systems
because of the monetary autonomy it brings to its users through
decentralization. Bitcoin seeks to address the root problem with
conventional currency: all the trust that's required to make it work--

-- Not that justified trust is a bad thing, but trust makes systems
brittle, opaque, and costly to operate. Trust failures result in systemic
collapses, trust curation creates inequality and monopoly lock-in, and
naturally arising trust choke-points can be abused to deny access to
due process. Through the use of cryptographic proof and decentralized
networks Bitcoin minimizes and replaces these trust costs.

With the available technology, there are fundamental trade-offs between
scale and decentralization. If the system is too costly people will be
forced to trust third parties rather than independently enforcing the
system's rules. If the Bitcoin blockchain’s resource usage, relative
to the available technology, is too great, Bitcoin loses its competitive
advantages compared to legacy systems because validation will be too
costly (pricing out many users), forcing trust back into the system.
If capacity is too low and our methods of transacting too inefficient,
access to the chain for dispute resolution will be too costly, again
pushing trust back into the system.
"""

- Bryan
http://heybryan.org/
1 512 203 0507
Dave Scotese via bitcoin-dev
2015-12-10 04:08:04 UTC
If we partition the work using bits from the TxID (once it is no longer
malleable) or even bits from whatever definition we use for "coin," then
every transaction may have to use all the other partitions to verify that
the incoming coin is good.

If all partitions are involved in validating and storing every transaction,
then we may be doing more work in total, but any one node will only have to
do (and store) a fraction of what it is now. We would want the current
situation to be identical to one in which all the participants are handling
all the partitions. So how can we break up the work so that any
participant can handle whatever fraction of the work he or she wants? One
idea is to use the last bits of the address that will receive the subsidy
and fees. You solve the block for your partition by determining that all
transactions in the block are valid against the subset of blocks whose
hashes end with the same bits.

This solution is broadcast in the hope that others will start attempting to
validate that same block on their own partition. If they are mining the
same partition, they simply change their subsidy address to work on a
different partition. Each time a new-but-not-last partition is solved,
everyone working on the block adds the new solver's output address to their
generation transaction with the appropriate fraction of the
reward-plus-subsidy. In this way, several miners contribute to the
solution of a single block and need only store those blocks that match the
partitions they want to work on.

Suppose we use eight bits so that there are 256 partitions and a miner
wishes to do about 1/5 of the work. That would be 51 partitions. This is
signaled in the generation transaction, where the bit-pattern of the last
byte of the public key identifies the first partition, and the proportion
of the total reward for the block (51/256) indicates how many partitions a
solution will cover.

Suppose that the last byte of the subsidy address is 0xF0. This means
there are only 16 partitions left, so we define partition selection to wrap
around. This 51/256 miner must cover partitions 0xF0 - 0xFF and 0x00 -
0x22. In this way, all mining to date has covered all partitions.

The number of bits to be used might be able to be abstracted out to a
certain level. Perhaps a miner can indicate how many bits B the
partitioning should use in the coinbase. The blocks for which a partition
miner claims responsibility are all those with a bit pattern of length B at
the end of their hash matching the bits at the end of the first
output's public key in the generation transaction, as well as those blocks
with hashes for which the last B bits match any of the next N bit patterns,
where N is the largest integer for which the claimed output is not less
than (subsidy+fees)*(N/(2^B)).
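
A hedged sketch of this coverage rule (the floor reading of N, whether the
starting pattern counts toward N, and the 25 BTC reward figure are all my
assumptions; see also the author's correction further down the thread):

    def covered_partitions(first: int, claimed: float,
                           subsidy_plus_fees: float,
                           B: int = 8) -> list[int]:
        """Bit patterns a miner covers, wrapping past 2**B - 1, reading
        N as floor(claimed * 2**B / (subsidy + fees))."""
        total = 2 ** B
        n = int(claimed * total / subsidy_plus_fees)  # floor for positives
        return [(first + i) % total for i in range(n)]

    # The example above: a miner claiming 51/256 of a 25 BTC reward and
    # starting at 0xF0 wraps around to cover 0xF0..0xFF and 0x00..0x22.
    parts = covered_partitions(0xF0, claimed=25.0 * 51 / 256,
                               subsidy_plus_fees=25.0)
    assert len(parts) == 51
    assert parts[0] == 0xF0 and parts[15] == 0xFF and parts[-1] == 0x22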

If you only store and validate against one partition, and that partition
has a solution already, then you would start working on the next block
(once you've validated the current one against your subset of the
blockchain). You could even broadcast a solution for that next block
before the previous block is fully solved, thus claiming a piece of the
next block reward (assuming the current block is valid on all partitions).

It seems that a miner who covers only one partition will be at a serious
disadvantage, but as the rate of incoming transactions increases, the
fraction of time he must spend validating (about half that of miners who
cover just one more partition) makes up for this disadvantage somewhat. He
is a "spry" miner and therefore wins more rewards during times of very
dense transaction volume. If we wish to encourage miners to work on
smaller partitions, we can provide a difficulty break for smaller
fractions of the work. In fact, the difficulty can be adjusted down for
the first solution, and then slowly back up to full for the last
partition(s).
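
One possible shape for that ramp, as a sketch (the linear curve and the
50% first-solution discount are assumptions; the text names no numbers):

    def partition_difficulty(full_difficulty: float,
                             solved_fraction: float,
                             first_discount: float = 0.5) -> float:
        """Linear ramp from a discounted difficulty for the first
        partition solved back up to full difficulty for the last."""
        assert 0.0 <= solved_fraction <= 1.0
        return full_difficulty * (first_discount +
                                  (1.0 - first_discount) * solved_fraction)

    # The first solver works at half difficulty, the last at full.
    assert partition_difficulty(1000.0, 0.0) == 500.0
    assert partition_difficulty(1000.0, 1.0) == 1000.0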

This proposal has the added benefit of encouraging the assembly of blocks
by miners who work on single partitions to get them out there with a
one-partition solution.

--
I like to provide some work at no charge to prove my value. Do you need a
techie?
I own Litmocracy <http://www.litmocracy.com> and Meme Racing
<http://www.memeracing.net> (in alpha).
I'm the webmaster for The Voluntaryist <http://www.voluntaryist.com> which
now accepts Bitcoin.
I also code for The Dollar Vigilante <http://dollarvigilante.com/>.
"He ought to find it more profitable to play by the rules" - Satoshi
Nakamoto
Dave Scotese via bitcoin-dev
2015-12-10 04:14:17 UTC
Edit:
"... as well as those blocks with hashes for which the last B bits match
any of the next N bit patterns, where N is the largest integer for which
the claimed output is not greater than (subsidy+fees)*(N/(2^B))."