mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Math (https://www.mersenneforum.org/forumdisplay.php?f=8)
-   -   VDF (Verifiable Delay Function) and PRP (https://www.mersenneforum.org/showthread.php?t=24654)

patnashev 2020-06-24 03:49

It's a model to calibrate your hashes against. [STRIKE]Btw, x^(h0*h0) is not modular and can be precomputed.[/STRIKE]

Prime95 2020-06-24 19:48

1 Attachment(s)
Some data (now using the attached sliding window exponentiation routine)

Proof level 8:
Proof generator does 15489 or 20632 squarings (48-bit vs 64-bit hash)
Server does 1080 or 1414 squarings
Proof verifier does 390625 squarings (assuming 100,000,000 exponent)

Proof level 9:
Proof generator does 31485 or 41925 squarings (48-bit vs 64-bit hash)
Server does 1207 or 1577 squarings
Proof verifier does 195312 squarings (assuming 100,000,000 exponent)

From a total system point of view, we can see that proof level 10 is currently optimal. Compared to level 9, generator does 42K more squarings to save the verifier 97K squarings.

If 800 PRP tests a day are reported to the server, I think it can handle 1.2M squarings a day. My quad core Haswells can generate 10 million squarings a day.

For me, at proof level 9, the 10500 squarings saved for 48-bit vs. 64-bit hash represents 1/2 PRP test a year.
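
The verifier counts follow directly from the exponent: at proof level t the verifier does roughly exponent / 2^t squarings. A minimal sanity-check sketch (the formula is inferred from the figures above; generator and server counts depend on the hash length and are not modeled here):

```python
# Verifier cost at proof level t is roughly exponent / 2^t squarings
# (inferred from the figures quoted above, e.g. 100M / 2^8 = 390625).
def verifier_squarings(exponent, level):
    return exponent // 2 ** level

exponent = 100_000_000
costs = {level: verifier_squarings(exponent, level) for level in (8, 9, 10)}
# Level 10 saves the verifier 195312 - 97656 = 97656 squarings vs level 9,
# matching the ~97K figure above.
```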

Prime95 2020-06-24 20:02

[QUOTE=preda;548954]The attack would be so much less practical for a 48- or 64-bit hash. So, I'd take the above attack more as an indication of a "lack of a practical attack", right?[/QUOTE]

In choosing the hash function, we are really guarding against a researcher discovering a weakness in the system that is not presently known.

If the future weakness is a reduction in brute force effort, then a longer hash key is our safeguard. So let's rule out 32-bit hash values.

If the future weakness revolves around a small hash value, let's thwart them by making all hash values >= 2^32.

If the future weakness results from some root-of-unity issue, let's rule out multiples of the PRP base 3. I'd remove multiples of two just for good measure. Removing hashes with more small primes is also possible. In total, this does not greatly reduce the search space for the brute force attacker.

I'm happy with 48-bit or 64-bit (or anything in-between!). When eliminating 0 mod 3 hashes, the scheme should not be a simple "add 2" as that would favor 2 mod 3 hashes.

Further comments? Time to come up with the concrete algorithm?
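
One possible concrete algorithm (a sketch only, with SHA3-256 as a stand-in hash; nothing here is decided): rejection sampling, i.e. rehash with a counter until the value qualifies. Unlike a fixed "add 2", this does not favor any residue class among the qualifying values:

```python
import hashlib

def qualifying_hash(data: bytes) -> int:
    # Rejection sampling: append a counter and rehash until the 64-bit
    # value is >= 2^32, odd, and not a multiple of 3. This is unbiased
    # over qualifying values, unlike a fixed "add 2" adjustment.
    # (SHA3-256 is a stand-in here; the hash choice is still open.)
    counter = 0
    while True:
        digest = hashlib.sha3_256(data + counter.to_bytes(4, "big")).digest()
        h = int.from_bytes(digest[:8], "big")
        if h >= 2 ** 32 and h % 2 == 1 and h % 3 != 0:
            return h
        counter += 1

h = qualifying_hash(b"example")
```

About 1/3 of candidate values qualify (half are odd, two thirds of those avoid the factor 3, and nearly all 64-bit values exceed 2^32), so the expected number of rehashes is small.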

wedgingt 2020-06-24 20:39

Avoiding multiples of 2 or 3 in hash values
 
If you just want to eliminate values that are multiples of 2 or 3:
[code]
/* Bump value to the next residue that is 1 or 5 mod 6,
   i.e. not a multiple of 2 or 3. Indexed by value % 6. */
int add[2*3] = { 1, 0, 3, 2, 1, 0 };
value += add[value % 6];
[/code]The value will always be either 1 or 5 mod 6 after this, but with a bias toward 5 mod 6 (four of the six residues map to 5, only two to 1). Changing one value in the array appropriately would eliminate the bias.
To avoid possibly exceeding 2^64, the array values could be subtracted instead, though that can sometimes produce a final value < 2^32.
If you also want to eliminate multiples of 5, expand the array to 2*3*5 appropriately.
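
To make the bias elimination concrete: changing the entry for value % 6 == 4 from 1 to 3 sends residues {0, 1, 4} to 1 mod 6 and {2, 3, 5} to 5 mod 6, three each. A small sketch (in Python for brevity; the C array would change the same entry):

```python
# Hypothetical unbiased variant of the add-table above: the entry for
# value % 6 == 4 is changed from 1 to 3, so residues {0, 1, 4} land on
# 1 mod 6 and {2, 3, 5} land on 5 mod 6 -- three each.
ADD = [1, 0, 3, 2, 3, 0]

def skip_multiples_of_2_and_3(value):
    return value + ADD[value % 6]

# Every output is 1 or 5 mod 6, and a uniform input residue now maps
# to each class equally often.
results = [skip_multiples_of_2_and_3(v) % 6 for v in range(6)]
```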
-- Will




henryzz 2020-06-24 21:25

[QUOTE=Prime95;549021]Some data (now using the attached sliding window exponentiation routine)

Proof level 8:
Proof generator does 15489 or 20632 squarings (48-bit vs 64-bit hash)
Server does 1080 or 1414 squarings
Proof verifier does 390625 squarings (assuming 100,000,000 exponent)

Proof level 9:
Proof generator does 31485 or 41925 squarings (48-bit vs 64-bit hash)
Server does 1207 or 1577 squarings
Proof verifier does 195312 squarings (assuming 100,000,000 exponent)

From a total system point of view, we can see that proof level 10 is currently optimal. Compared to level 9, generator does 42K more squarings to save the verifier 97K squarings.

If 800 PRP tests a day are reported to the server, I think it can handle 1.2M squarings a day. My quad core Haswells can generate 10 million squarings a day.

For me, at proof level 9, the 10500 squarings saved for 48-bit vs. 64-bit hash represents 1/2 PRP test a year.[/QUOTE]

What are the current estimates of storage and bandwidth requirements of each of these levels?

preda 2020-06-24 22:54

[QUOTE=Prime95;549023]In choosing the hash function, we are really guarding against a researcher discovering a weakness in the system that is not presently known.

If the future weakness is a reduction in brute force effort, then a longer hash key is our safeguard. So let's rule out 32-bit hash values.

If the future weakness revolves around a small hash value, let's thwart them by making all hash values >= 2^32.

If the future weakness results from some root-of-unity issue, let's rule out multiples of the PRP base 3. I'd remove multiples of two just for good measure. Removing hashes with more small primes is also possible. In total, this does not greatly reduce the search space for the brute force attacker.

I'm happy with 48-bit or 64-bit (or anything in-between!). When eliminating 0 mod 3 hashes, the scheme should not be a simple "add 2" as that would favor 2 mod 3 hashes.

Further comments? Time to come up with the concrete algorithm?[/QUOTE]

A more complicated hash does not make a better hash (but it can make a weaker hash if we're not careful). I think it was Knuth who said something along the lines of "a random algorithm does not make a good random number generator". We're unlikely to obtain any security benefit by "hardening in the dark", without knowing why or what we're hardening against.

I propose we simply use SHA3-256 truncated to 64 bits for the "h" exponents. The chaining of the hashes, OTOH, is done using the full SHA3-256.
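
A minimal sketch of this proposal (the input byte layout here is made up for illustration; a real scheme would pin it down exactly):

```python
import hashlib

def chain_hash(prev_hash: bytes, residue: bytes) -> bytes:
    # Chaining uses the full SHA3-256 output. The concatenation order
    # and byte layout are illustrative assumptions, not a fixed spec.
    return hashlib.sha3_256(prev_hash + residue).digest()

def h_exponent(full_hash: bytes) -> int:
    # The "h" exponent is the hash truncated to 64 bits.
    return int.from_bytes(full_hash[:8], "big")

root = chain_hash(b"", b"example-residue")
h = h_exponent(root)
assert 0 <= h < 2 ** 64
```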

Maybe we should also present our "simple" hash scheme to the larger crypto community and ask them for an attack?

kriesel 2020-06-25 14:14

[QUOTE=preda;549031]
[STRIKE]Maybe[/STRIKE] we should also present our "simple" hash scheme to the larger crypto community and ask them for an attack[STRIKE]?[/STRIKE][/QUOTE]Get best available informed input, yes.
I think doing that repeatedly is one of the things that has made prime95 and gpuowl as good as they already are.

patnashev 2020-06-28 05:48

We've started searching for GFN-15 Mega (b^32768+1, 1M digits). b is a hundred-bit number, but Pietrzak VDF works just fine with such numbers.

kriesel 2020-07-13 15:27

[QUOTE=R. Gerbicz;523833]In fact these methods work also for the standard LL test, so there is an incredibly fast "proof" to convince ourselves in only O(sqrt(p)) multiplications, if we limit the exponents in the proof to, say, 64 bits.

Just let
H(n) = (2+sqrt(3))^(2^n) in Z[sqrt(3)] mod mp.
LL says that mp is prime iff H(p-2)=0 (where here we omit the sqrt(3) stuff).

Of course we have to use this ugly sqrt(3) part to enable the various fast checks on the powers in H(); if we kept the conjugate part then we would lose the power form.

The error checks go in the usual way, but in the above ring, not the friendly Z.
This is somewhat interesting, I mean having a fast "almost" true primality check. One drawback is the high storage of the ~sqrt(p) residues. (There is a tradeoff here: you can lower the storage by increasing the computation time.)[/QUOTE]
Please elaborate. (In a way that could be clear to those of us that don't know much about rings.)

The LL test goes as before, but sqrt(p) residues are saved along the way?
What is Z? Notation for a ring?
What is the process by which those sqrt(p) saved residues are processed to produce a verification of correctness?
If it works, this seems to me to offer a more convincing proof of having correctly found a Mersenne prime than matching zero residues in multiple runs, at the cost of more programming effort to adopt it.
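
For reference, the textbook identity behind Gerbicz's H(n) (a standard fact, not from his post): the LL sequence s_0 = 4, s_{k+1} = s_k^2 - 2 has the closed form

```latex
s_n = \left(2+\sqrt{3}\right)^{2^n} + \left(2-\sqrt{3}\right)^{2^n},
\qquad \left(2-\sqrt{3}\right) = \left(2+\sqrt{3}\right)^{-1},
```

so mp | s_(p-2) is equivalent to (2+sqrt(3))^(2^(p-1)) = -1 mod mp in Z[sqrt(3)], and each LL step becomes a single squaring of H(n), which is the power form the fast checks need.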

kruoli 2020-07-15 08:34

Hi, I have activated the proof feature on my run of M215856353, are you interested in the data? If yes, I'd upload it. It was done using v6.11-325-g7c09e38 with [c]-proof 10[/c].

preda 2020-07-15 12:14

[QUOTE=kruoli;550650]Hi, I have activated the proof feature on my run of M215856353, are you interested in the data? If yes, I'd upload it. It was done using v6.11-325-g7c09e38 with [c]-proof 10[/c].[/QUOTE]

I think there may have been some changes to the proof file format since that version. If you still have the temporary residues that are stored under 215856353/proof/, you could use a recent version of gpuowl to regenerate the proof in the new format. To do this, run:
[c]./gpuowl -prp 215856353 -proof 10[/c]
which should restart the exponent 215856353 from the last checkpoint (which is very near 100%), run the last few iterations, and regenerate the proof in the new format.

The proof can't be uploaded yet, so you'd still need to keep it around for a bit.

