Re: Validity of hashing, stuff about entropy


Mike Rosing (eresrch@msn.fullfeed.com)
Sun, 12 Jul 1998 21:04:20 -0500 (CDT)


On Mon, 13 Jul 1998, Anonymous wrote:

> With pretty much any source of randomness, if you know what the biases
> are, you can write code to simulate the source, so do so. Write a program
> that takes 160 bits of input and outputs stuff that looks just like the
> stuff the source of randomness outputs, different for each input.

Exactly. The smart attacker will know the generator.
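For concreteness, such a simulator might look something like the Python
sketch below (the SHA-1 expansion and the 70%-ones bias are purely my
placeholder assumptions for illustration, not the real generator):

    import hashlib

    def simulate_source(seed, nbits):
        # Deterministic stand-in for the real randomness source. The
        # 160-bit seed plays the role of the "160 bits of input" above;
        # the bias model is made up for the example.
        assert len(seed) == 20          # 160 bits
        bits, counter = [], 0
        while len(bits) < nbits:
            block = hashlib.sha1(seed + counter.to_bytes(4, "big")).digest()
            counter += 1
            for byte in block:
                bits.append(1 if byte < 179 else 0)   # about 70% ones
                if len(bits) == nbits:
                    break
        return bits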

> If you have some method which can yield a guess at the hash of your
> simulation's output which is correct with a probability of more than
> 1-((2^160-1)/(2^160))^((simulation time/hash time)+1) or so (don't trust
> my math; it will betray you if you do) then just watching the hashed
> outputs of the simulation with different inputs for a collision will yield
> a collision faster than a birthday attack.
>
> With any good hash function, nothing can find a collision faster than a
> birthday attack can. Therefore, your best guess at the hashed output can't
> have probability greater than
> 1-((2^160-1)/(2^160))^((simulation time/hash time)+1), and there are,
> for cryptographic purposes, at least
> log(1/(1-((2^160-1)/(2^160))^((simulation time/hash time)+1)))/log(2)
> bits of entropy in the result.

I guess I don't see why the ratio of sim time to hash time is useful.
Presumably the attacker can duplicate the generator and feed that to the
hash. They have to do it many times to find the collision.

Do you mean a simulator that is faster than a duplicate of the generator?
That would be very cool, and could reduce the entropy a lot if the ratio
were big enough (since it's in the exponent).
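
Just to put rough numbers on that exponent (my arithmetic, so check it),
here is a quick Python sketch of Anonymous's bound. It suggests the
speed-up only buys the attacker about log2(ratio) bits:

    from math import log2

    def entropy_bound_bits(ratio):
        # Anonymous's bound: -log2(1 - ((2**160 - 1)/2**160)**(ratio + 1)),
        # where ratio = simulation time / hash time. For any ratio that is
        # tiny compared with 2**160 this is, to excellent approximation,
        # 160 - log2(ratio + 1), which is what is computed here.
        return 160 - log2(ratio + 1)

    for r in (1, 1000, 10**6, 10**12):
        print(r, round(entropy_bound_bits(r), 1))
    # 1 -> 159.0, 1000 -> 150.0, 10**6 -> 140.1, 10**12 -> 120.1

So even a simulator a trillion times faster than the hash would leave
around 120 bits by this bound, if I've read the formula right.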

> The number of bits of entropy in what everybody seems to call a "system"
> is the sum, over each possible state, of the product of its probability
> and the base-two logarithm of the reciprocal of that probability.

For binary computers, that's a fact.
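
In code that definition is just the following (a quick Python sketch; the
three-state distribution is a made-up example):

    from math import log2

    def shannon_entropy(probs):
        # Sum over states of p * log2(1/p), i.e. -p * log2(p);
        # impossible (p = 0) states contribute nothing.
        return sum(-p * log2(p) for p in probs if p > 0)

    print(shannon_entropy([0.25] * 4))        # 2.0 bits: four equally likely states
    print(shannon_entropy([0.7, 0.2, 0.1]))   # about 1.16 bits: a biased source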

> It might be more convenient to say that a "system" has n bits of entropy
> when that same observing, interfering attacker has to make an average of
> 2^(n-1) guesses at its state before getting it right. When there are only
> 2^n possible states, this means that the most likely state has probability
> no greater than 2^-n.

And we want n to be as big as the hash function (160 bits). What sort of
started this was a comment about "mixing" sources. If the mixing process
is poor, it reduces n, which makes the attacker's task easier. Actually, n
~ 85 should be sufficient for the next 50 years. But making sure the
average user gets that many bits of random selection is not simple.
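
To make Anonymous's guessing definition concrete (a small sketch, with a
uniform 16-state toy example; a real source won't be this clean):

    from math import log2

    def expected_guesses(probs):
        # Average number of guesses when the attacker tries states in order
        # of decreasing probability (guess #1 is the most likely state).
        ordered = sorted(probs, reverse=True)
        return sum(i * p for i, p in enumerate(ordered, start=1))

    def min_entropy(probs):
        # -log2 of the most likely state's probability: the n in
        # "the most likely state has probability no greater than 2^-n".
        return -log2(max(probs))

    uniform = [1.0 / 16] * 16
    print(expected_guesses(uniform))   # 8.5, roughly 2**(4-1)
    print(min_entropy(uniform))        # 4.0 bits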

Can we measure entropy as we gather random bits? A good model and a lot
of tests will work, but can it be done on the fly to ensure users have
enough random states entered? I think that would be hard, but I haven't
tried to do it either.
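
One crude way to do it on the fly (nothing like a real test battery, just
an illustration of the idea) is to track symbol frequencies as the samples
come in and report min-entropy per sample from the most common value so far:

    from collections import Counter
    from math import log2

    class RunningEntropyEstimate:
        # Running estimate of min-entropy per sample, based on the most
        # common symbol seen so far. It only catches gross bias; it says
        # nothing about correlations between samples.
        def __init__(self):
            self.counts = Counter()
            self.total = 0

        def add(self, symbol):
            self.counts[symbol] += 1
            self.total += 1

        def min_entropy_per_sample(self):
            if self.total == 0:
                return 0.0
            p_max = self.counts.most_common(1)[0][1] / self.total
            return -log2(p_max)

    est = RunningEntropyEstimate()
    for b in b"not really random, just keyboard mashing":
        est.add(b)
    print(est.min_entropy_per_sample())   # bits per byte; far below 8 here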

Patience, persistence, truth,
Dr. mike

