[QUOTE=Prime95;548529]Bonus, the server upon receiving the big proof file can immediately perform the steps to calculate A[N-1], B[N-1] and discard all the middle values in the proof file, saving server disk space. Even lop B[N-1] down to res64 bits saving more space.[/QUOTE]
Yes, exactly! No need to store the full proof. Process it immediately and queue the certificate for validation.
[QUOTE=Prime95;548529]In your opinion, is there any security risk in sending A[N-1] to the original tester?[/QUOTE]
Yes, the risk is real. It's not only the cheaters who want to fake the original test; it's also the lazy ones who would rather return their stored B[N-1] than perform a few iterations. And B[N-1] can be wrong due to a hardware error at the proof-generation stage, which would be detected by the consensus of good verifiers. Applying a random exponent to both sides of the certificate takes little time and basically "encrypts" the certificate against the original tester: (A[N-1]^random, res64(B[N-1]^random))
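The random-exponent blinding can be sketched in Python. This is a toy illustration, not GIMPS code: the helper name, the small Mersenne modulus, and the res64 truncation are all illustrative. The point is the identity (A^r)^(2^k) == (A^(2^k))^r == B^r (mod N), so a verifier who squares the blinded starting value k times must land on B^r without ever learning B itself.

```python
import random

def blind_certificate(A, B, N):
    """Blind a certificate (A, B), where B == A^(2^k) mod N, with a
    secret random exponent r. Illustrative sketch only.
    Returns the blinded start value and res64 of the expected result."""
    r = random.randrange(2, N - 1)        # server-side secret exponent
    A_blind = pow(A, r, N)                # what gets sent out for verification
    expected = pow(B, r, N)               # server keeps only res64 of this
    return A_blind, expected & ((1 << 64) - 1)

# Tiny demonstration with a toy modulus.
N = 2**61 - 1          # small Mersenne prime standing in for the candidate
k = 10                 # number of squarings in the certificate
A = 3
B = pow(A, 2**k, N)    # honest final residue
A_blind, res64 = blind_certificate(A, B, N)

# An honest verifier squares A_blind k times and must match res64.
check = A_blind
for _ in range(k):
    check = check * check % N
assert check & ((1 << 64) - 1) == res64
```

A dishonest tester who only stored B cannot answer, because B^r looks unrelated to B without knowing r.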
[QUOTE=Prime95;548529]Bonus, the server upon receiving the big proof file can immediately perform the steps to calculate A[N-1], B[N-1] and discard all the middle values in the proof file, saving server disk space. Even lop B[N-1] down to res64 bits saving more space.
[/QUOTE] OK, this sounds good! It also generalizes nicely, e.g. there is a new work type, consisting of the tuple:

<candidate, start-residue, number-of-squarings>

The client of such a task would run "number-of-squarings" squarings starting from "start-residue" modulo "candidate" (here "candidate" is a general expression of the form a*b^c + d [QUOTE=Prime95;547887]Examples: M1234567, 653*3^123456-15[/QUOTE]). The result (what the server receives back) of executing such a work type is, at the server's choice, either the res64 (res128?) or the full final residue.

Such a work type would encompass both the classic PRP test and the proof "certificate verification". It also allows splitting large PRP tests into smaller pieces, which should address the problem of "abandoned PRP tests" (where a large test is started by a new participant, progresses to e.g. 20%, and is abandoned afterwards, the work lost); it might even allow useful work to be extracted from the "just heat my CPU to the max for 1 hour" system-testing mprime users. The drawback, as always, is that full residues are large -- 12MB for a 100M exponent (the encoding of the "residue" would support representing small values like "3" without paying the space).

Advantages:
- supports proof "certificate verification"
- supports the classic Mersenne PRP test
- allows splitting one PRP test into multiple sequential tasks
- allows a stronger form of the "offset/shift" protection, secure and without client changes (using the residue^random trick)

(side question: this might be a good time to switch to res128 from res64)
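The generic work type above can be sketched as a single function. This is a minimal model, not client code: the function name and the `want_full` flag are made up for illustration. It also demonstrates the task-splitting property, since k squarings followed by m squarings equals k+m squarings.

```python
def run_task(N, start_residue, num_squarings, want_full=False):
    """Model of the proposed generic work type: repeatedly square
    start_residue modulo N. Here N stands for the already-evaluated
    candidate expression a*b^c + d. Illustrative sketch only."""
    x = start_residue % N
    for _ in range(num_squarings):
        x = x * x % N
    # the server chooses the result form: full residue or res64
    return x if want_full else x & ((1 << 64) - 1)

# Splitting one PRP test into two sequential tasks yields the same
# final residue as running it in one piece.
N = 2**89 - 1
whole = run_task(N, 3, 1000, want_full=True)
first = run_task(N, 3, 600, want_full=True)
split = run_task(N, first, 400, want_full=True)
assert whole == split
```

The split test is exactly the "abandoned PRP" recovery scenario: whatever residue a participant last reported becomes the start-residue of a fresh task.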
[QUOTE=preda;548550](side question: this might be a good time to switch to res128 from res64)[/QUOTE]
I was thinking a 256-bit SHA-3 result.
[QUOTE=preda;548550]
<candidate, start-residue, number-of-squarings> The client of such a task would run "number-of-squarings" starting from "start-residue" modulo "candidate". The result (what the server receives back) of executing such a work type is, at the server's choice, either the res64 (res128?) or the full final residue. [/QUOTE]
The result would be, at the server's choice, one of:
1. a hash of the final residue (such as SHA3-256; previously res64)
2. the full final residue
3. a proof

If "proof" is requested, the "number-of-squarings" included in the task must be a multiple of 2^power (where "power" can be included in the task or fixed beforehand).

Edit: in fact case 2 above, "full final residue", is just a particular case of a proof with power == 0. So it doesn't need special handling, reducing to just two cases.
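The three result kinds, and the power-0 reduction, can be modeled in a few lines. The function name, the byte encoding of the residue, and the list representation of a proof are all illustrative assumptions, not anything from the actual protocol.

```python
import hashlib

def result_for_server(final_residue, kind):
    """Model of the three result kinds discussed: a SHA3-256 hash of
    the final residue, the full residue, or a proof. A power-0 proof
    carries no middle residues, so it reduces to the full residue.
    Encoding choices here are illustrative."""
    nbytes = (final_residue.bit_length() + 7) // 8 or 1
    data = final_residue.to_bytes(nbytes, 'little')
    if kind == 'hash':
        return hashlib.sha3_256(data).hexdigest()
    if kind == 'full':
        return final_residue
    if kind == 'proof-power-0':
        return [final_residue]   # proof with power == 0: no middles
    raise ValueError(kind)

assert result_for_server(12345, 'proof-power-0') == [12345]
```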
[QUOTE=Prime95;548504]Compromise, 48-bits?
Algorithm:
[CODE]
x = sha3-256(something)
x = x & (2^48 - 1)
if (x < 2^32) x += 2^48
x = x | 1
while (some combo of: x divisible by 3, 5, 7; or x is composite; or x == previous_hash_value)
    x += 2
return x
[/CODE][/QUOTE]
I would like to see a clearer explanation of why 32 bits won't work, or why they do work. Absent that, I'd prefer to stick with 64, and without the hacks. I don't like the "hash hardening" scheme -- what attack, specifically, does it prevent? In particular, I don't see why the factors of the hash, or whether it has a small value, matter at all.

Let's consider the "worst possible" hash value, zero. The attacker knows a hash value of 0 is great for him, because it removes the left half from the verification: (A^h * M)^q == M^h * B becomes, when h == 0, M^q == B. All the attacker has to do to exploit this presumed weakness is:
1. take some M (guessed, random, or sequential)
2. compute B by raising M to q -- a significant number of PRP iterations, say at least 1000 in all cases
3. compute SHA3(B) and check whether the low 32 bits are zero
4. repeat

For a 32-bit hash, the expected number of repeats of the above loop until a zero is found is 2 billion (half of 2^32), which, multiplied by 1000 PRP iterations, comes to 2 trillion PRP iterations. I say, not a practical attack -- [much] more expensive than "just run the damn test". Not to mention what the work would be with a 64-bit hash..
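For concreteness, the quoted hardening algorithm can be transcribed into runnable Python. The Miller-Rabin helper, the byte order, and the function names are my own choices for the sketch, not taken from prime95 source; the explicit 3/5/7 checks are kept only to mirror the quoted pseudocode (a primality test subsumes them).

```python
import hashlib

def is_probable_prime(n):
    """Deterministic Miller-Rabin for 64-bit-sized n (bases 2..37)."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def hardened_hash(data, previous=None):
    """Transcription of the quoted hardening proposal: take the low
    48 bits of SHA3-256, force the value above 2^32, make it odd,
    then step upward to the next prime distinct from the previous
    hash value. Sketch only; encoding choices are assumptions."""
    x = int.from_bytes(hashlib.sha3_256(data).digest(), 'little')
    x &= (1 << 48) - 1
    if x < (1 << 32):
        x += 1 << 48
    x |= 1
    while (x % 3 == 0 or x % 5 == 0 or x % 7 == 0
           or not is_probable_prime(x) or x == previous):
        x += 2
    return x
```

Note the result is always an odd prime larger than 2^32, which is what the divisibility and smallness objections below are about.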
[QUOTE=preda;548579]... 2 trillion PRP iterations. I say, not a practical attack ...[/QUOTE]... today.
But in a few years, perhaps 2T iterations become easy? I'd say, let's not go down the 32-bit route. Historically, 32 bits used to be considered enough to make things impossible; but look what happened: "suddenly" 32 bits became almost trivial as computers grew more powerful.
[QUOTE=preda;548579]
Let's consider the "worst possible" hash value, zero. The attacker knows a hash value of 0 is great for him, because it removes the left half from the verification: (A^h * M)^q == M^h * B becomes, when h == 0, M^q == B. [/QUOTE] You can easily avoid this: just set hash = 2*hash + 1. Even if you did this for every computed hash, no hash value would be zero among the h_i.
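The remark above is a one-line transformation; as a sketch (the helper name is made up), it guarantees every challenge is odd and nonzero at the cost of one extra bit:

```python
def nonzero_odd_hash(h):
    """Gerbicz's suggestion: map h -> 2*h + 1, so no challenge can be
    zero (or even). Illustrative helper, not from any codebase."""
    return 2 * h + 1

assert nonzero_odd_hash(0) == 1   # the h == 0 case is eliminated
```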
[QUOTE=R. Gerbicz;548581]You can easily avoid this just set hash=2*hash+1, even if you'd this for every computed hash then no hash value will be zero among h_i.[/QUOTE]
Yes, but I'm not convinced that anything is gained (that the security is improved) by doing that transformation, thus I'd prefer not to do it. There was no mention in the paper of the author being afraid of particular hash values; I really don't think that's the weak point.
As a matter of fact, the proof is not unique. I ran simulations at small numbers and found some ways to decrease the number of parasitic proofs. First of all, check that x_t != 0 and gcd(x_t, N) = 1. This check can't fail in normal circumstances, but hardware errors can make it happen. Then there are a lot of proofs that differ from each other only by roots of unity modulo N. Since we do exponentiations, roots of unity just disappear from the calculated values. And while the probability of encountering one is low, and it's not practical to look for them on purpose, I feel more comfortable when the hashes contain no small divisors, reducing the probability of gcd(hash, phi(N)) != 1.
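The suggested residue checks, and the reason small divisors in the hash matter, can be shown with toy numbers (the function name is illustrative). When gcd(h, phi(N)) != 1, the map x -> x^h mod N is not injective, so distinct residues can collide after exponentiation; here phi(13) = 12 and h = 3 gives 3^3 == 9^3 (mod 13).

```python
import math

def sanity_check(x_t, N):
    """The two checks suggested above for a proof residue: it must be
    nonzero and coprime to N. Both only fail under hardware error
    (or if a factor of a composite N turns up). Sketch only."""
    return x_t != 0 and math.gcd(x_t, N) == 1

assert sanity_check(5, 13)
assert not sanity_check(0, 13)

# Why small divisors in the challenge hash are uncomfortable:
# gcd(3, phi(13)) == 3, so cubing mod 13 collapses distinct residues.
assert pow(3, 3, 13) == pow(9, 3, 13) == 1
```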
[QUOTE=patnashev;548586]As a matter of fact, the proof is not unique. I ran simulations at small numbers and found some ways to decrease the number of parasitic proofs. First of all, check that x_t != 0 and gcd(x_t, N) = 1. This check can't fail in normal circumstances, but hardware errors can make it happen. Then there are a lot of proofs that differ from each other only by roots of unity modulo N. Since we do exponentiations, roots of unity just disappear from the calculated values. And while the probability of encountering one is low, and it's not practical to look for them on purpose, I feel more comfortable when the hashes contain no small divisors, reducing the probability of gcd(hash, phi(N)) != 1.[/QUOTE]
We don't require the proof to be unique. We're only concerned with having the correct final residue, which is contained in the proof. In fact, this is the statement of the proof: starting from A and doing topK iterations, you reach B. The "middles" contained in the proof are there just to offer an easy way to check that statement.

Do you see a way to fake the final residue? Can you explain in a bit more detail what is the problem with the roots of unity? How easily can they be discovered, and how can they be used for an attack?

PS: I realize A is always 3 in the current setup, and as such is not included in the proof. Maybe we should generalize and allow any starting point, but that's a different topic.
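To make the role of the "middles" concrete, here is a toy single-level version of the check (the real proof recurses over many levels; names and the toy modulus are illustrative). If M really is A^(2^t) and B really is M^(2^t) mod N, then for any challenge h the identity (A^h * M)^(2^t) == M^h * B holds, because the left side expands to A^(h*2^t) * A^(2^(2t)) = M^h * B.

```python
import random

def squarings(x, n, N):
    """n repeated squarings of x modulo N, i.e. x^(2^n) mod N."""
    for _ in range(n):
        x = x * x % N
    return x

def one_level_check(A, M, B, t, N):
    """One round of the middle-point check: verify the claim
    'M == A^(2^t) and B == M^(2^t) (mod N)' using a random challenge
    h, at the cost of only t squarings instead of 2t. Toy sketch of
    a single level of the recursive proof."""
    h = random.randrange(1, 1 << 32)
    left = squarings(pow(A, h, N) * M % N, t, N)
    right = pow(M, h, N) * B % N
    return left == right

N = 2**61 - 1
A, t = 3, 50
M = squarings(A, t, N)        # honest middle
B = squarings(M, t, N)        # honest final residue
assert one_level_check(A, M, B, t, N)
```

A corrupted middle or final residue fails the check except with negligible probability over the random challenge, which is why faking B requires more than tampering with the middles.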
[QUOTE=preda;548590]Do you see a way to fake the final residue?[/QUOTE]
No. [QUOTE=preda;548590]Can you explain in a bit more detail what is the problem with the roots of unity? How easily can they be discovered, and how can they be used for an attack?[/QUOTE] Besides attacks, there are also hardware and software errors. And we're much more likely to see those than an attack.