Discussion:
[bitcoin-dev] SHA1 collisions make Git vulnerable to attacks by third-parties, not just repo maintainers
Peter Todd via bitcoin-dev
2017-02-23 18:14:09 UTC
Permalink
Worth noting: the impact of the SHA1 collision attack on Git is *not* limited
to maintainers making maliciously colliding Git commits, but extends to
third parties submitting pull-reqs containing commits, trees, and especially
files for which collisions have been found. This is likely to be exploitable in
practice with binary files, as reviewers aren't necessarily going to notice
garbage at the end of a file needed for the attack; if the attack can be
extended to restricted character sets like Unicode or ASCII, we're in trouble
in general.

Concretely, I could prepare a pair of files with the same SHA1 hash, taking
into account the header that Git prepends when hashing files. I'd then submit
that pull-req to a project with the "clean" version of that file. Once the
maintainer merges my pull-req, possibly PGP signing the git commit, I then take
that signature and distribute the same repo, but with the "clean" version
replaced by the malicious version of the file.
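(For readers unfamiliar with the detail: Git doesn't hash raw file contents,
it hashes "blob <size>\0" followed by the contents, so any colliding pair has
to collide with that header included. A minimal sketch in Python of what
`git hash-object` computes:)

    import hashlib

    def git_blob_sha1(content: bytes) -> str:
        # Git prepends "blob <size>\0" before SHA1-hashing file contents
        header = b"blob " + str(len(content)).encode() + b"\x00"
        return hashlib.sha1(header + content).hexdigest()

    print(git_blob_sha1(b"hello\n"))
    # ce013625030ba8dba906f756967f9e9ca394464a, same as `git hash-object`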
--
https://petertodd.org 'peter'[:-1]@petertodd.org
Peter Todd via bitcoin-dev
2017-02-23 21:28:02 UTC
Permalink
Thinking about this a bit more, the most concerning avenue of attack is likely
to be tree objects, as I'll bet you can construct tree objects with garbage at
the end that many review tools don't pick up on. :(
--
https://petertodd.org 'peter'[:-1]@petertodd.org
Aymeric Vitte via bitcoin-dev
2017-02-23 23:57:45 UTC
Permalink
Maybe not: unlike frozen objects (certificates, etc.), trees are supposed
to extend over time.

Then you can perform progressive hash operations on the objects, i.e.
instead of hashing the intermediate hashes of the objects you hash
continuously: rather than hashing hash(file a) + hash(file b) + hash(file c),
you compute hash(file a + file b + file c), and when file d arrives you
compute hash(file a + file b + file c + file d). This implies keeping the
intermediate hash state each time, because you are not going to recompute
everything from the beginning.
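(A minimal sketch of this scheme, using Python's hashlib with SHA-1 purely as
a placeholder: one running hash over the concatenated objects, with an
intermediate digest published after each new object arrives.)

    import hashlib

    h = hashlib.sha1()  # placeholder; the idea is hash-function agnostic
    for chunk in (b"file a", b"file b", b"file c", b"file d"):
        h.update(chunk)
        # copy() snapshots the internal state, so we can publish
        # hash(a), hash(a+b), hash(a+b+c), ... without rehashing from scratch
        print(h.copy().hexdigest())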

I have not worked on this for some time, so these are just thoughts, but
maybe it can make things much more difficult than computing two files until
the same hash is found

The only living example I know of that implements this is the Tor protocol,
a fact that is apparently unknown, which is probably why nobody cares and
nobody is willing to take it into account (please follow the bwd/fwd thread
[1] and see [2]). It does not exist in any crypto implementation unless you
hack into it, and the same applies to progressive encryption.

[1]
https://lists.w3.org/Archives/Public/public-webcrypto-comments/2013Feb/0018.html


[2] https://github.com/whatwg/streams/issues/33#issuecomment-28554151
--
Zcash wallets made simple: https://github.com/Ayms/zcash-wallets
Bitcoin wallets made simple: https://github.com/Ayms/bitcoin-wallets
Get the torrent dynamic blocklist: http://peersm.com/getblocklist
Check the 10 M passwords list: http://peersm.com/findmyass
Anti-spies and private torrents, dynamic blocklist: http://torrent-live.org
Peersm : http://www.peersm.com
torrent-live: https://github.com/Ayms/torrent-live
node-Tor : https://www.github.com/Ayms/node-Tor
GitHub : https://www.github.com/Ayms
Tim Ruffing via bitcoin-dev
2017-02-24 10:04:54 UTC
Permalink
Post by Aymeric Vitte via bitcoin-dev
I have not worked on this for some time, so these are just thoughts,
but maybe it can make things much more difficult
than computing two files until the same hash is found
You basically rely on the idea that specific collisions are more
difficult to find. This trick or similar tricks will not help. (And
actually, the more files you add to the hash, the more freedom you give
the attacker.)

Even if certain collisions are more difficult to find today (which is
certainly true), the general rule is that someone will prove you wrong
in a year.

Even if we ignore security entirely, switching to a new hash function is
much simpler than trying to fix the usage of a broken hash function.

Relying on SHA1 is hopeless. We have to get rid of it.

Best,
Tim
Aymeric Vitte via bitcoin-dev
2017-02-24 15:18:43 UTC
Permalink
Not sure that you really read what I sent closely, because the claim that
hashing files continuously instead of hashing the intermediate steps
just gives more latitude to the attacker can't be true when the attacker
has absolutely no control over the past files

I did not write this as a workaround to fix SHA1, which will be dead sooner
or later, but as a general concept that could possibly help whatever hash
function you are using for objects that are not frozen but extending (the
original email stating that trees might be the worst candidates for
collisions reminded me of this). Indeed it makes no sense to patch SHA1 or
play around with it, but this kind of proposal could accompany whatever
replaces the defunct function.

The drawback is that you have to keep the hash state when you close the
latest hash computation in order to start the next one

Then the question is: knowing the hash state, is it as easy to find a
collision between two files that will be computed in the next round as it is
to find a collision between two files alone?

Knowing that you can probably modify the hash state with some
unpredictable patterns

Most likely the answer is: no, it's (astronomically?) more difficult

Please take it as a suggestion that might be explored (PS: I have the code
for this if needed) rather than an affirmation. I am still amazed, as shown
in the few links provided (among others), that each time I raise this
subject nobody really pays attention ("what's the use case?", etc.), and
that it's apparently used by only one project in the world and not supported
by any library
Tim Ruffing via bitcoin-dev
2017-02-24 16:30:49 UTC
Permalink
Post by Aymeric Vitte via bitcoin-dev
Not sure that you really read what I sent closely, because the claim that
hashing files continuously instead of hashing the intermediate steps
just gives more latitude to the attacker can't be true when the attacker
has absolutely no control over the past files
What prevents the attacker from providing different past files when talking
to parties who are still in the initial state?

Post by Aymeric Vitte via bitcoin-dev
Then the question is: knowing the hash state, is it as easy to find a
collision between two files that will be computed in the next round as it is
to find a collision between two files alone?
With the original usage of the hash function, the hash state is always
the initial state. Now the attacker even has some control over the hash
state. In other words, if the original use of the hash function was
vulnerable, then your scheme is vulnerable for the initial state.

Concrete attack: If you can find x != y with H(x) = H(y), then you can
also find m, x != y, with H(m||x) = H(m||y), just by setting m = "".
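(The real attacks are stronger than the m = "" case suggests:
identical-prefix collisions such as SHAttered collide the internal chaining
state, after which any identical continuation keeps the digests equal. A
sketch of that property, with hypothetical placeholders standing in for a
real equal-length, block-aligned colliding pair:)

    import hashlib

    # Hypothetical placeholders; substitute a real equal-length,
    # block-aligned SHA-1 colliding pair (e.g. the SHAttered prefixes).
    x = b"...colliding message one..."
    y = b"...colliding message two..."
    suffix = b"any common suffix, e.g. the rest of a PDF"

    if hashlib.sha1(x).digest() == hashlib.sha1(y).digest():
        # For chaining-state collisions, the collision survives any
        # common suffix appended to both messages.
        assert hashlib.sha1(x + suffix).digest() == \
               hashlib.sha1(y + suffix).digest()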

Not sure if this is the right place to discuss that issue though...

Best,
Tim
Aymeric Vitte via bitcoin-dev
2017-02-24 17:29:50 UTC
Permalink
??? apparently we are not discussing the same thing

Maybe I did not provide the right links (reading them again, I don't find
them so clear myself); see again
https://github.com/whatwg/streams/issues/33#issuecomment-28045860

a - b - c - d

hash(a)

hash(a+b)

etc

But you are not going to rehash from the beginning, then:

update a --> keep the remaining bytes a_ (+ hash state 1) --> digest
a=hash(a)

update a_+b from hash state 1--> keep the remaining bytes b_ (+ hash
state 2) --> digest a_+b=hash(a+b)

etc

Basically that's similar to a real-time progressive hash over the chunks of
a file that you are streaming, where you don't know what will come next (as
opposed to hashing a file that you already have); this could apply to trees

This is different from something like:

hash(a)

hash(hash(a) +hash(b))

etc

There is no initial state, and the attacker can't modify what was
already hashed; to make it more difficult you can probably modify the
hash state N
Steve Davis via bitcoin-dev
2017-02-24 23:49:36 UTC
Permalink
If the 20 byte SHA1 is now considered insecure (with good reason), what about RIPEMD-160, which is the foundation of Bitcoin addresses?

Is that also susceptible to such an attack vector?

What does that mean for old addresses?

etc

/s
Peter Todd via bitcoin-dev
2017-02-25 01:01:22 UTC
Permalink
Post by Steve Davis via bitcoin-dev
If the 20 byte SHA1 is now considered insecure (with good reason), what about RIPEMD-160 which is the foundation of Bitcoin addresses?
SHA1 is insecure because the SHA1 algorithm is insecure, not because 160bits isn't enough.

AFAIK there aren't any known weaknesses in RIPEMD160, but it also hasn't been
as closely studied as more common hash algorithms. That said, Bitcoin uses
RIPEMD160(SHA256(msg)), which may make creating collisions harder if an attack
is found than if it used RIPEMD160 alone.
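(For reference, the construction in question as a short Python sketch;
hashlib's "ripemd160" is available where OpenSSL provides it:)

    import hashlib

    def hash160(pubkey: bytes) -> bytes:
        # Bitcoin's address hash: RIPEMD160 of the SHA256 of the public key
        sha = hashlib.sha256(pubkey).digest()
        return hashlib.new("ripemd160", sha).digest()

    # hash160(serialized_pubkey) is the 20-byte payload of a P2PKH address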
--
https://petertodd.org 'peter'[:-1]@petertodd.org
Steve Davis via bitcoin-dev
2017-02-25 12:04:28 UTC
Permalink
Post by Peter Todd via bitcoin-dev
Post by Steve Davis via bitcoin-dev
If the 20 byte SHA1 is now considered insecure (with good reason), what about RIPEMD-160 which is the foundation of Bitcoin addresses?
SHA1 is insecure because the SHA1 algorithm is insecure, not because 160bits isn't enough.
AFAIK there aren't any known weaknesses in RIPEMD160,
…so far. I wonder how long that vacation will last?
Post by Peter Todd via bitcoin-dev
but it also hasn't been
as closely studied as more common hash algorithms.
...but we can be sure that it will be, since the dollar value held in existing utxos continues to increase...
Post by Peter Todd via bitcoin-dev
That said, Bitcoin uses
RIPEMD160(SHA256(msg)), which may make creating collisions harder if an attack
is found than if it used RIPEMD160 alone.
Does that offer any greater protection? That’s not so clear to me as the outputs (at least for p2pkh) only verify the public key against the final 20 byte hash. Specifically, in the first (notional) case the challenge would be to find a private key that has a public key that hashes to the final hash. In the second (realistic) case, you merely need to add the sha256 hash into the problem, which doesn’t seem to me to increase the difficulty by any significant amount?


/s
Leandro Coutinho via bitcoin-dev
2017-02-25 14:50:30 UTC
Permalink
Google recommends "migrate to safer cryptographic hashes such as SHA-256 and
SHA-3".
It does not mention RIPEMD-160.

https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html?m=1


Ethan Heilman via bitcoin-dev
2017-02-25 16:10:02 UTC
Permalink
Post by Peter Todd via bitcoin-dev
SHA1 is insecure because the SHA1 algorithm is insecure, not because 160bits isn't enough.
I would argue that 160-bits isn't enough for collision resistance. Assuming
RIPEMD-160(SHA-256(msg)) has no flaws (i.e. is a random oracle), collisions
can be generated in 2^80 queries (actually detecting these collisions
requires some additional time-memory trade-offs). The Bitcoin network at
the current hash rate performs roughly ~2^78 SHA-256 queries a day, or 2^80
queries every four days. Without any break in RIPEMD-160(SHA-256(msg)) the
US could build an ASIC datacenter and produce RIPEMD-160 collisions for a
fraction of its yearly cryptologic budget.
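(The arithmetic behind that estimate, for reference: a birthday attack on a
160-bit hash needs about sqrt(2^160) = 2^80 queries.)

    # Back-of-the-envelope check of the figures above (2017-era hash rate)
    network_rate = 2**78        # approx. SHA-256 queries per day
    collision_work = 2**80      # birthday bound for a 160-bit digest
    print(collision_work / network_rate)  # -> 4.0 days at that rate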

The impact of collisions in RIPEMD-160(SHA-256(msg)), according to "On
Bitcoin Security in the Presence of Broken Crypto Primitives"
(https://eprint.iacr.org/2016/167.pdf):
Collisions are similar, though in this case both public keys are under the
adversary’s control, and again the adversary does not have access to the
private keys. In both scenarios, there is a question of nonrepudiation
external to the protocol itself: by presenting a second pre-image of a key
used to sign a transaction, a user/adversary can claim that his coins were
stolen.

How would such an event affect the price of Bitcoin when the headlines are
"Bitcoin's Cryptography Broken"? How much money could someone make by
playing the market in this way?

For reasons of both credibility and good engineering (safety
margins), Bitcoin should strive to always use cryptography which is beyond
reproach.


Alice Wonder via bitcoin-dev
2017-02-25 18:19:11 UTC
Permalink
Post by Ethan Heilman via bitcoin-dev
Post by Peter Todd via bitcoin-dev
SHA1 is insecure because the SHA1 algorithm is insecure, not because
160bits isn't enough.
I would argue that 160-bits isn't enough for collision resistance.
Assuming RIPEMD-160(SHA-256(msg)) has no flaws (i.e. is a random
oracle), collisions can be generated in 2^80 queries (actually detecting
these collisions requires some time-memory additional trade-offs). The
Bitcoin network at the current hash rate performs roughly SHA-256 ~2^78
queries a day or 2^80 queries every four days.
You have to not only produce a ripemd160 collision, you have to produce
a collision that is also a valid sha-256 hash - and that's much much
much more difficult.
Ethan Heilman via bitcoin-dev
2017-02-25 18:36:49 UTC
Permalink
Post by Alice Wonder via bitcoin-dev
You have to not only produce a ripemd160 collision, you have to produce a
collision that is also a valid sha-256 hash - and that's much much much
more difficult.

I agree that merely finding a collision in RIPEMD-160 will be hard to use
in Bitcoin.

However, finding a collision in RIPEMD-160(SHA-256(msg)) via brute force
(2^80 queries) is not particularly more difficult than finding a collision in
RIPEMD-160 via brute force. Furthermore, if you find a collision in
RIPEMD-160(SHA-256(msg)), you also get a valid SHA-256 hash for which you
know the preimage.
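(This is easy to see in miniature: truncate the composed hash so a birthday
search finishes in seconds, and note that every collision found also yields
SHA-256 digests with known preimages. A toy sketch with 32-bit truncation;
requires OpenSSL's ripemd160:)

    import hashlib

    def h(msg: bytes, nbytes: int = 4) -> bytes:
        # Truncated RIPEMD-160(SHA-256(msg)); the full 20-byte version
        # would need ~2^80 queries instead of ~2^16.
        inner = hashlib.sha256(msg).digest()
        return hashlib.new("ripemd160", inner).digest()[:nbytes]

    seen = {}
    i = 0
    while True:
        m = i.to_bytes(8, "big")
        d = h(m)
        if d in seen:
            print("collision:", seen[d].hex(), "and", m.hex(), "->", d.hex())
            break
        seen[d] = m
        i += 1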


Shin'ichiro Matsuo via bitcoin-dev
2017-02-25 17:45:36 UTC
Permalink
We should distinguish collision resistance from 2nd pre-image resistance, in general.

As previously written, we should care about both the hash output length and the algorithm itself. The weakness of SHA-0 (a preliminary version of SHA-1) was reported in 2004, and much research on the structure of SHA-1 was conducted afterwards. In the case of SHA-2, it is harder than SHA-1 to find collisions.

Existing security consideration and evaluation criteria were extensively discussed in the NIST SHA-3 competition. Please see the following sites.

https://ehash.iaik.tugraz.at/wiki/The_SHA-3_Zoo
https://ehash.iaik.tugraz.at/wiki/Cryptanalysis_Categories

We need a similar analysis of RIPEMD160 and of the impact of attacks on RIPEMD160(SHA2(msg)).

We can also refer to the security assumptions for hash chains in the Asiacrypt 2004 paper:
https://home.cyber.ee/~ahtbu/timestampsec.pdf

In the SHA-3 competition, another hash design structure was chosen, the so-called "sponge structure." This brings diversity to the design principles of hash functions and gives resilience even when one hash design structure becomes vulnerable. As Peter Todd wrote, discussion of design structure and algorithm is important. Discussion of all of algorithm, output length and security requirements is needed.

At some future moment, we should think about the transition of the underlying hash functions. I’m working on this subject and will present an idea at IEEE S&B.

Shin’ichiro Matsuo
Henning Kopp via bitcoin-dev
2017-02-27 09:15:29 UTC
Permalink
Hi all,

I did not follow the whole discussion, but wanted to throw in some
literature on the failure of crypto primitives in Bitcoin.

There is a paper which discusses the problems, but does not give any
remedies: https://eprint.iacr.org/2016/167.pdf

And there are also contingency plans on the wiki:
https://en.bitcoin.it/wiki/Contingency_plans These are not very
detailed and my impression is that this information should be viewed
very critically (E.g., when ECDSA is broken, the suggested vague
response is "Switch to the stronger algorithm." Yeah. And "Code for
all of this should be prepared." Surely. As far as I know, there is no
such code and no-one is working on it).

Best,
Henning
--
Henning Kopp
Institute of Distributed Systems
Ulm University, Germany

Office: O27 - 3402
Phone: +49 731 50-24138
Web: http://www.uni-ulm.de/in/vs/~kopp
Peter Todd via bitcoin-dev
2017-02-25 19:12:01 UTC
Permalink
Post by Ethan Heilman via bitcoin-dev
Post by Peter Todd via bitcoin-dev
SHA1 is insecure because the SHA1 algorithm is insecure, not because
160bits isn't enough.
I would argue that 160-bits isn't enough for collision resistance. Assuming
RIPEMD-160(SHA-256(msg)) has no flaws (i.e. is a random oracle), collisions
can be generated in 2^80 queries.
That's something that we're well aware of; there have been a few discussions on
this list about how P2SH's 160-bits is insufficient in certain use-cases such
as multisig.

However, remember that a 160-bit *security level* is sufficient, and RIPEMD160
has 160-bit security against preimage attacks. Thus things like
pay-to-pubkey-hash are perfectly secure: sure you could generate two pubkeys
that have the same RIPEMD160(SHA256()) digest, but if someone does that it
doesn't cause the Bitcoin network itself any harm, and doing so is something
you choose to do to yourself.

In any case, segwit will provide a 256-bit pay-to-witness-script-hash(1), which
provides a 128-bit security level against collision attacks.

1) https://github.com/bitcoin/bips/blob/master/bip-0143.mediawiki#Native_P2WSH
--
https://petertodd.org 'peter'[:-1]@petertodd.org
Russell O'Connor via bitcoin-dev
2017-02-25 20:53:12 UTC
Permalink
On Sat, Feb 25, 2017 at 2:12 PM, Peter Todd via bitcoin-dev <
Post by Peter Todd via bitcoin-dev
Post by Ethan Heilman via bitcoin-dev
Post by Peter Todd via bitcoin-dev
SHA1 is insecure because the SHA1 algorithm is insecure, not because
160bits isn't enough.
I would argue that 160-bits isn't enough for collision resistance. Assuming
RIPEMD-160(SHA-256(msg)) has no flaws (i.e. is a random oracle), collisions
can be generated in 2^80 queries.
That's something that we're well aware of; there have been a few discussions on
this list about how P2SH's 160-bits is insufficient in certain use-cases such
as multisig.
However, remember that a 160-bit *security level* is sufficient, and RIPEMD160
has 160-bit security against preimage attacks. Thus things like
pay-to-pubkey-hash are perfectly secure: sure you could generate two pubkeys
that have the same RIPEMD160(SHA256()) digest, but if someone does that it
doesn't cause the Bitcoin network itself any harm, and doing so is something
you choose to do to yourself.
Be aware that the issue is more problematic for more complex contracts.
For example, if you are building a P2SH 2-of-2 multisig together with someone
else and you are not careful, party A can hand their key over to party B,
who may then try to generate a collision between their second key and
another 2-of-2 multisig where they control both keys. See
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2016-January/012205.html
Peter Todd via bitcoin-dev
2017-02-25 21:04:06 UTC
Permalink
I'm very aware of that, in fact I think I may have even been the first person
to post on this list the commit-reveal mitigation.
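(A minimal sketch of that commit-reveal idea, with random placeholder byte
strings standing in for real keys: each party fixes their key via a salted
commitment before either key is revealed, so neither key can be ground out
as a function of the other.)

    import hashlib, os

    def commitment(pubkey: bytes, salt: bytes) -> bytes:
        return hashlib.sha256(salt + pubkey).digest()

    # Hypothetical stand-ins for each party's serialized public keys
    pubkey_a, pubkey_b = os.urandom(33), os.urandom(33)

    # Phase 1: exchange commitments only
    salt_a, salt_b = os.urandom(16), os.urandom(16)
    c_a, c_b = commitment(pubkey_a, salt_a), commitment(pubkey_b, salt_b)

    # Phase 2: reveal (pubkey, salt); each side checks the other's commitment
    assert commitment(pubkey_a, salt_a) == c_a
    assert commitment(pubkey_b, salt_b) == c_b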

Note how I said earlier in the message you're replying to that "P2SH's 160-bits
is insufficient in certain use-cases such as multisig"
--
https://petertodd.org 'peter'[:-1]@petertodd.org
Dave Scotese via bitcoin-dev
2017-02-25 21:21:56 UTC
Permalink
I was under the impression that RIPEMD160(SHA256(msg)) is used to turn a
PUBLIC key (msg) into a bitcoin address, so yeah, you could identify
ANOTHER (or the same, I guess - how would you know?) public key that has
the same bitcoin address if RIPEMD-160 collisions are easy, but I don't see
how that has any effect on anyone. Maybe I'm restating what Peter wrote.
If so, confirmation would be nice.

--
I like to provide some work at no charge to prove my value. Do you need a
techie?
I own Litmocracy <http://www.litmocracy.com> and Meme Racing
<http://www.memeracing.net> (in alpha).
I'm the webmaster for The Voluntaryist <http://www.voluntaryist.com> which
now accepts Bitcoin.
I also code for The Dollar Vigilante <http://dollarvigilante.com/>.
"He ought to find it more profitable to play by the rules" - Satoshi
Nakamoto
Peter Todd via bitcoin-dev
2017-02-25 21:40:18 UTC
Permalink
Yea, well. I don’t think it is ethical to post instructions without an associated remediation (BIP) if you don’t see the potential attack.
I can't agree with you at all there: we're still at the point where the
computational costs of such attacks limit their real-world impact, which is
exactly when you want the *maximum* exposure to what they are and what the
risks are, so that people develop mitigations.

Keeping details secret tends to keep the attacks out of public view, which
might be a good trade-off in a situation where the attacks are immediately
practical and the need to deploy a fix is well understood. But we're in the
exact opposite situation.
I was rather hoping that we could have a fuller discussion of what the best practical response would be to such an issue?
Deploying segwit's 256-bit digests is a response that's already fully coded and
ready to deploy, with the one exception of a new address format. That address
format is being actively worked on, and could be deployed relatively quickly if
needed.
--
https://petertodd.org 'peter'[:-1]@petertodd.org
Steve Davis via bitcoin-dev
2017-02-25 21:54:16 UTC
Permalink
Hi Peter,
Post by Peter Todd via bitcoin-dev
Yea, well. I don’t think it is ethical to post instructions without an associated remediation (BIP) if you don’t see the potential attack.
I can't agree with you at all there: we're still at the point where the
computational costs of such attacks limit their real-world impact, which is
exactly when you want the *maximum* exposure to what they are and what the
risks are, so that people develop mitigations.
I agree with the latter part of your statement but am actually much less confident about the first part… I need to run some numbers on that.
Post by Peter Todd via bitcoin-dev
Keeping details secret tends to keep the attacks out of public view, which
might be a good trade-off in a situation where the attacks are immediately
practical and the need to deploy a fix is well understood. But we're in the
exact opposite situation.
I was rather hoping that we could have a fuller discussion of what the best practical response would be to such an issue?
Deploying segwit's 256-bit digests is a response that's already fully coded and
ready to deploy, with the one exception of a new address format. That address
format is being actively worked on, and could be deployed relatively quickly if
needed.
I really, really don’t want to get into it, but segwit has many aspects that are less appealing, not least of which is the amount of time it would take to reach critical mass.

Surely there are a number of alternative approaches which could be explored, even if only to make a fair assessment of a best response?

/s
Pieter Wuille via bitcoin-dev
2017-02-25 22:14:44 UTC
Permalink
On Feb 25, 2017 14:09, "Steve Davis via bitcoin-dev" <
bitcoin-***@lists.linuxfoundation.org> wrote:

Hi Peter,


I really, really don’t want to get into it, but segwit has many aspects that
are less appealing, not least of which is the amount of time it would
take to reach critical mass.

Surely there are a number of alternative approaches which could be explored,
even if only to make a fair assessment of a best response?


Any alternative to move us away from RIPEMD160 would require:
* A drafting of a softfork proposal, implementation, testing, review.
* A new address format
* Miners accepting the new consensus rules
* Wallets adopting the new address format, both on the sender side and
receiver side (which requires new signatures).

I.e., exactly the same as segwit, for which most of these are already done.
And it would still only apply to wallets adopting it.
--
Pieter
Ethan Heilman via bitcoin-dev
2017-02-25 22:34:38 UTC
Permalink
I strongly encourage Bitcoin to move from 80-bit collision resistance
(RIPEMD-160) to 128-bit collision resistance (SHA-256).
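
For reference, the generic birthday bound behind those security levels, as a
back-of-the-envelope sketch (pure arithmetic, not any specific attack):

    # Colliding an n-bit hash costs roughly 2^(n/2) evaluations (birthday bound)
    def collision_work(n_bits: int) -> float:
        return 2.0 ** (n_bits / 2)

    print(collision_work(160))  # ~1.2e24: RIPEMD-160's 80-bit collision level
    print(collision_work(256))  # ~3.4e38: SHA-256's 128-bit collision level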

On Sat, Feb 25, 2017 at 5:14 PM, Pieter Wuille via bitcoin-dev <
Post by Steve Davis via bitcoin-dev
Hi Peter,
I really, really don’t want to get into it, but segwit has many aspects
that are less appealing, not least of which is the amount of time it
would take to reach critical mass.
Surely there are a number of alternative approaches which could be explored,
even if only to make a fair assessment of a best response?
Any alternative to move us away from RIPEMD160 would require:
* A drafting of a softfork proposal, implementation, testing, review.
* A new address format
* Miners accepting the new consensus rules
* Wallets adopting the new address format, both on the sender side and
receiver side (which requires new signatures).
I.e., exactly the same as segwit, for which most of these are already
done. And it would still only apply to wallets adopting it.
--
Pieter
Pieter Wuille via bitcoin-dev
2017-02-26 06:36:25 UTC
Permalink
On Feb 25, 2017 22:26, "Steve Davis" <***@gmail.com> wrote:

Hi Pieter,
<snipped>
“Any alternative”? What about reverting to:

[<public_key>, OP_CHECKSIG]


snip


Could that be the alternative?


Ok, fair enough, that is an alternative that avoids the 160-bit hash
function, but not where it matters. The 80-bit collision attack only
applies to jointly constructed addresses like multisig P2SH, not single-key
ones. As far as I know, for those we rely only on preimage security, and
RIPEMD160 has 160-bit security there, which is even more than our ECDSA
signatures offer.
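
For concreteness, a sketch of the two output-script shapes under discussion,
using the standard script opcodes (helper names here are illustrative):

    OP_DUP, OP_HASH160, OP_EQUALVERIFY, OP_CHECKSIG = 0x76, 0xa9, 0x88, 0xac

    def p2pk_script(pubkey: bytes) -> bytes:
        # <public_key> OP_CHECKSIG: commits to the key itself, no hash involved
        return bytes([len(pubkey)]) + pubkey + bytes([OP_CHECKSIG])

    def p2pkh_script(h160: bytes) -> bytes:
        # OP_DUP OP_HASH160 <20-byte digest> OP_EQUALVERIFY OP_CHECKSIG
        return bytes([OP_DUP, OP_HASH160, 20]) + h160 + bytes([OP_EQUALVERIFY, OP_CHECKSIG])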
--
Pieter
Steve Davis via bitcoin-dev
2017-02-26 07:16:37 UTC
Permalink
The 80-bit collision attack only applies to jointly constructed addresses like multisig P2SH, not single-key ones.
That’s the part I’m less convinced about, and why I asked the original question re SHA1 vs RIPEMD.

I’m checking my own numbers (and as you’ll appreciate, it’s a powers-of-ten thing), but I do see a vector, which would mean that if RIPEMD were weakened in any way, single-key transactions could suddenly become badly exposed.
Steve Davis via bitcoin-dev
2017-02-26 16:53:29 UTC
Permalink
Typical hash function breaks produce collision attacks, while a preimage attack is needed to reduce single-key address security.
Thank you Pieter - that was really helpful. I realize now that I was thinking of a preimage attack but had mistakenly assumed that the birthday bound applied...

So the unit operation [genkeypair; ripemd160(sha256(pubkey)); check_utxoset] would need to be performed 2.9*10^42 times, and not (as I had first calculated) 2.4*10^18 times.

Oops. My bad.
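
(Reconstructing that arithmetic: both figures are consistent with assuming
roughly 5*10^5 target addresses in the UTXO set, an assumption inferred here
rather than stated; the corrected number is a multi-target preimage cost of
2^160/N, while the mistaken one applied the 2^80 birthday figure to the same N:)

    N = 5.0e5               # assumed number of target addresses (illustrative)
    print(2.0 ** 160 / N)   # ~2.9e42: multi-target preimage work
    print(2.0 ** 80 / N)    # ~2.4e18: the earlier, mistaken birthday-based figure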
Leandro Coutinho via bitcoin-dev
2017-02-25 23:09:18 UTC
Permalink
If people split their bitcoins across multiple addresses, then maybe there
would be no need to worry(?), because the computational cost of an attack
would be higher than what the attacker would gain.


From Google:
https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html

Here are some numbers that give a sense of how large scale this
computation was:

- Nine quintillion (9,223,372,036,854,775,808) SHA1 computations in total
- 6,500 years of CPU computation to complete the attack first phase
- 110 years of GPU computation to complete the second phase


https://bitinfocharts.com/top-100-richest-bitcoin-addresses.html
Richest address: 124,178 BTC ($142,853,079 USD)



On Sat, Feb 25, 2017 at 6:40 PM, Peter Todd via bitcoin-dev <
Yea, well. I don’t think it is ethical to post instructions without an
associated remediation (BIP) if you don’t see the potential attack.
I can't agree with you at all there: we're still at the point where the
computational costs of such attacks limit their real-world impact, which is
exactly when you want the *maximum* exposure to what they are and what the
risks are, so that people develop mitigations.
Keeping details secret tends to keep the attacks out of public view, which
might be a good trade-off in a situation where the attacks are immediately
practical and the need to deploy a fix is well understood. But we're in the
exact opposite situation.
I was rather hoping that we could have a fuller discussion of what the
best practical response would be to such an issue?
Deploying segwit's 256-bit digests is a response that's already fully coded and
ready to deploy, with the one exception of a new address format. That address
format is being actively worked on, and could be deployed relatively quickly if
needed.
Peter Todd via bitcoin-dev
2017-02-25 20:57:06 UTC
Permalink
On Sat, Feb 25, 2017 at 11:12 AM, Peter Todd via bitcoin-dev
Post by Peter Todd via bitcoin-dev
Post by Ethan Heilman via bitcoin-dev
Post by Peter Todd via bitcoin-dev
SHA1 is insecure because the SHA1 algorithm is insecure, not because
160 bits isn't enough.
I would argue that 160-bits isn't enough for collision resistance. Assuming
RIPEMD-160(SHA-256(msg)) has no flaws (i.e. is a random oracle), collisions
That's something that we're well aware of; there have been a few discussions on
this list about how P2SH's 160-bits is insufficient in certain use-cases such
as multisig.
However, remember that a 160-bit *security level* is sufficient, and RIPEMD160
has 160-bit security against preimage attacks. Thus things like
pay-to-pubkey-hash are perfectly secure: sure, you could generate two pubkeys
that have the same RIPEMD160(SHA256()) digest, but if someone does that it
doesn't cause the Bitcoin network itself any harm, and doing so is something
you choose to do to yourself.
P2SH is not secure against collision. I could write two scripts with
the same hash, one of which is an escrow script and the other which
pays it to me, have someone pay to the escrow script, and then get the
payment. Some formal analysis tools would ignore the unused
instructions even if human analysis would not.
That's what I said: "P2SH's 160-bits is insufficient in certain use-cases such
as multisig"

Obviously any use-case where multiple people are creating a P2SH redeemScript
collaboratively is potentially vulnerable. Use-cases where the redeemScript was
created by a single party, however, are _not_ vulnerable, as that party has
complete control over whether or not collisions are possible, by virtue of the
fact that they're the ones who have to make the collision happen!
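
(Conceptually, the malicious co-signer in the collaborative case runs a generic
birthday search over redeemScripts they can influence. A toy sketch with
hypothetical helper names, wildly impractical as written since a real attack
needs roughly 2^80 work and memory-efficient collision finding:)

    def find_colliding_scripts(hash160, honest_script, evil_script, gen_key):
        # Grind candidate keys, hashing one honest and one evil redeemScript per
        # key, until two different scripts share a digest (birthday collision).
        seen = {}
        while True:
            key = gen_key()
            for script in (honest_script(key), evil_script(key)):
                digest = hash160(script)
                if digest in seen and seen[digest] != script:
                    return seen[digest], script
                seen[digest] = script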

Similarly, even in the multisig case, commit-reveal techniques can mitigate the
vulnerability by forcing parties to commit to the pubkeys/hashlocks/etc.
they'll use for the script before any of them are revealed.
Though a better long-term approach is to use a 256-bit digest size, as segwit
does.
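
(A minimal sketch of that commit-reveal idea, with hypothetical helper names:
each party publishes a salted hash of their contribution before anything is
revealed, so nobody can grind their key against the others':)

    import hashlib, os

    def commit(pubkey: bytes):
        # Phase 1: publish only SHA256(salt || pubkey), fixing the key in advance
        salt = os.urandom(32)
        return salt, hashlib.sha256(salt + pubkey).digest()

    def check_reveal(salt: bytes, pubkey: bytes, commitment: bytes) -> bool:
        # Phase 2: after all commitments are exchanged, keys are revealed and
        # checked, so none could have been chosen as a function of the others
        return hashlib.sha256(salt + pubkey).digest() == commitment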
--
https://petertodd.org 'peter'[:-1]@petertodd.org