mersenneforum.org > Math VDF (Verifiable Delay Function) and PRP

2020-06-19, 04:21   #144
patnashev

"Pavel Atnashev"
Mar 2020

100101₂ Posts

Quote:
 Originally Posted by Prime95 Thanks for joining the discussion. Your insights are invaluable.
Happy to share my experience!

Quote:
 Originally Posted by Prime95 So if I understand correctly, knowing h1...hn in advance allows cheating with 1/2 PRP test effort.
No, it allows cheating with only the first two intermediate points computed. Basically, for free. Pietrzak's VDF is secure only as long as you use hashes that depend on the current result: you can't do any proof calculations before the end of the test.
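The dependency patnashev describes is the Fiat-Shamir heuristic: each challenge is derived by hashing the very values it protects, so it cannot be known before the test produces them. A minimal sketch in Python (the byte encoding and the 64-bit truncation here are assumptions for illustration, not the actual GIMPS format):

```python
import hashlib

def challenge(x, y, u, bits=64):
    """Derive the challenge h by hashing the current proof values x, y
    and the midpoint u. Because h depends on u, the prover cannot pick u
    after learning h, and cannot compute h before the test reaches u."""
    data = b"|".join(str(v).encode() for v in (x, y, u))
    digest = hashlib.sha3_256(data).digest()
    return int.from_bytes(digest[:bits // 8], "big")
```

Any change to an input changes the challenge, which is exactly what blocks precomputing the proof.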

2020-06-19, 05:28   #145
Prime95
P90 years forever!

Aug 2002
Yeehaw, FL

7,043 Posts

Quote:
 Originally Posted by patnashev No, it allows to cheat with only first two intermediate points.
Nuts! Do you have any brilliant ideas for reducing the amount of temp storage required?

Power = 10, Exponent = 100M: 12.5GB of disk space. It gets worse as we test larger numbers.
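The arithmetic behind that figure, as a quick sanity check (assuming one full residue of `exponent` bits is kept per intermediate point, and a power-P proof keeps 2^P of them):

```python
def proof_temp_storage_bytes(exponent, power):
    """Temp-disk estimate for proof generation: 2^power intermediate
    residues, each about `exponent` bits (a residue mod 2^exponent - 1)."""
    residue_bytes = exponent // 8         # one full residue, in bytes
    return (2 ** power) * residue_bytes   # total bytes on disk

gb = proof_temp_storage_bytes(100_000_000, 10) / 1e9  # roughly 12.8 GB
```

At power 10 that is 1024 residues of about 12.5 MB each, and the total doubles with every increment of the power.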

2020-06-19, 05:48   #146
patnashev

"Pavel Atnashev"
Mar 2020

37 Posts

As a matter of fact, yes! I have an idea and have already presented it. Don't go that deep: 6..8 is enough. Offload the exponentiation at the final iteration to participants; there's a secure way to do it. You cut the upload size in half and the VDF overhead in half, with almost the same amount of computation. Temp storage also stops exploding, and verification becomes scalable: the server has to do only a small amount of work. At least consider this as an option. It's not the only way to do Pietrzak's VDF, but it's an option.
2020-06-19, 06:35   #147
preda

"Mihai Preda"
Apr 2015

3·5·7·11 Posts

Quote:
 Originally Posted by Prime95 Now a question. Are we gaining anything by using a 64-bit h value? Why not 32 or 16 bits [...]
Let's decide the hash size. I think 64 bits is large enough and may be a bit of an overkill. Who has an opinion for/against setting the hash size to 32 bits? (Personally I'm fine either way. If somebody could show an argument that 32 bits is "weak" in some way, that would help make the decision.)

The hash size only affects the producer of the proof; the proof generation overhead (i.e. on top of the PRP test) is roughly linear with the hash size.

2020-06-19, 07:49   #148
patnashev

"Pavel Atnashev"
Mar 2020

100101₂ Posts

In the cheating scenario with u_i = const, the probability of finding a solution increases as the hash size decreases. A lower distance also helps. I personally like 64 bits, but that's a feeling. I also prefer hashes that are not divisible by small primes, to reduce the probability of roots of unity messing things up.
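The roots-of-unity concern can be seen in a toy group: if a hash h is divisible by a small prime d and the group contains an element omega of order d, then (x*omega)^h = x^h, so distinct bases collide under exponentiation by h. A tiny demonstration mod the Mersenne prime 31 (the specific numbers are illustrative only):

```python
p = 31        # toy modulus, 2^5 - 1; multiplicative group order is 30
omega = 25    # an element of order 3 mod 31: 25^3 = 1 (mod 31)
x = 2

h = 6         # hash divisible by 3: x and x*omega collide
assert pow(x * omega % p, h, p) == pow(x, h, p)

h = 7         # hash coprime to 3: no such collision here
assert pow(x * omega % p, h, p) != pow(x, h, p)
```

Keeping the hash free of small prime factors removes this family of collisions for cheap.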
2020-06-19, 11:49   #149
kriesel

"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

1098₁₆ Posts

Quote:
 Originally Posted by preda Let's decide the hash size. I think 64 bits is large enough and may be a bit overkill. Who has opinion for/against setting the hash size to 32bits? (Personally I'm fine either way. If somebody would show an argument that 32bits is "weak" in some way, that would help make the decision). The hash size only affects the producer of the proof; the proof generation overhead (i.e. on top of the PRP test) is roughly linear with the hash size.
How is suitable hash size related to exponent size, and how bad would it be to use a hash suitable for exponents up to 2^32? Res15 and 16 got replaced with res64. Transitions have their costs. Remember Y2K and 640K RAM.

2020-06-19, 15:23   #150
Prime95
P90 years forever!

Aug 2002
Yeehaw, FL

7,043 Posts

Quote:
 Originally Posted by preda Let's decide the hash size. I think 64 bits is large enough and may be a bit overkill. Who has oppinion for/against setting the hash size to 32bits? (Personally I'm fine either way. If somebody would show an argument that 32bits is "weak" in some way, that would help make the decision).
Compromise, 48-bits?

Algorithm:
x = sha-3-256(something)
x = x & (2^48 - 1)
if (x < 2^32) x += 2^48
x = x | 1
while (some combo of x divisible by 3,5,7 or x is composite or x == previous_hash_value) x += 2
return x

Last fiddled with by Prime95 on 2020-06-19 at 15:53

2020-06-19, 16:56   #151
Prime95
P90 years forever!

Aug 2002
Yeehaw, FL

7043₁₀ Posts

Quote:
 Originally Posted by patnashev Yes, this is exactly the scenario that has to be dealt with. Solution is simple: provide the verifier with x_t^random and expect y_t^random from him. Even if the verifier knows (x_t, y_t), he has to do discrete logarithm to obtain the random exponent and cheat. Not a small problem, to say the least. Computing x_t^random^distance is much easier.
So, let's see if I understand correctly (my track record is not too good there). Using preda's terminology in post 17, the server calculates A0 = 3^random and sends it to the verifier along with the proof. The verifier sends prp(A[N-1], topK/(2^N)) and B[N-1] to the server. The server compares prp(A[N-1], topK/(2^N)) to B[N-1]^random.

Quote:
 This way the tester can securely verify his own test, which is a useful feature for projects with low participation.
Whoa. Did you just solve the bandwidth problem? Sounds way too good to be true. Why not have the PRP tester verify his own proof? Why send the massive proof to the server, which then sends it to someone else? Total bandwidth required is one download of 3^random and two uploads: prp(A[N-1], topK/(2^N)) and B[N-1].

Last fiddled with by Prime95 on 2020-06-19 at 17:05

2020-06-19, 17:28   #152
patnashev

"Pavel Atnashev"
Mar 2020

37 Posts

Quote:
 Originally Posted by Prime95 So, let's see if I understand correctly (my track record is not too good there). Using preda's terminology in post 17, the server calculates A0 = 3^random and sends it to the verifier along with the proof.
No. The server performs all calculations in that post until
prp(A[N-1], topK/(2^N)) == B[N-1]
Only this one step is offloaded to a verifier. In Pietrzak's terminology A[N-1] = x_t, B[N-1] = y_t, and let's call 2^(topK/(2^N)) the distance. So the server sends A[N-1] (one number, not the whole proof) to a verifier and asks it to perform topK/(2^N) modular squarings. The verifier returns the result; it doesn't have to be the whole number, a res64 is enough. The server compares that res64 to res64(B[N-1]), and if it matches, the test is verified. The verifier makes no decisions, it just performs a few iterations.

Quote:
 Originally Posted by Prime95 Why not have the PRP tester verify his own proof? Why send the massive proof to the server which then sends it to someone else?
The computation of (x_t, y_t) has to be done by a trusted authority (otherwise the pair can be just a random number and its ^distance power). That's why I call this process "certification", and x_t becomes a "certificate" which can be validated by anyone. (For clarity, I omit the additional ^random step, but it is there, in the certificate.)
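The offloaded step and the server's check, sketched with a toy Mersenne modulus. The function names and toy parameters are mine; in the real scheme the squaring count is topK/(2^N) and the modulus is the number under test:

```python
def res64(x):
    """Low 64 bits of a residue, as the thread uses the term."""
    return x & ((1 << 64) - 1)

def verifier_work(x_t, squarings, modulus):
    """What the verifier is asked to do: repeated modular squarings of
    the certificate x_t, reporting only a 64-bit residue of the result."""
    y = x_t
    for _ in range(squarings):
        y = y * y % modulus
    return res64(y)

def server_check(reported, y_t):
    """Server side: the test is verified iff the reported res64
    matches res64 of the stored y_t."""
    return reported == res64(y_t)

# Toy example: modulus 2^61 - 1, certificate x_t = 3, 10 squarings,
# so y_t should equal 3^(2^10) mod (2^61 - 1).
M = (1 << 61) - 1
y_t = pow(3, 1 << 10, M)
assert server_check(verifier_work(3, 10, M), y_t)
```

The verifier never sees y_t and returns only 64 bits, which is why the upload is tiny and the verifier can make no decision on its own.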

2020-06-19, 17:37   #153
patnashev

"Pavel Atnashev"
Mar 2020

37 Posts

My certification/validation scheme allows computation to be shifted securely between parties, with a simple goal: limit the depth to 6..8, limit bandwidth, limit server load, and limit temporary storage.
2020-06-19, 19:57   #154
Prime95
P90 years forever!

Aug 2002
Yeehaw, FL

1B83₁₆ Posts

Quote:
 Originally Posted by patnashev No. The server performs all calculations in that post until prp(A[N-1], topK/(2^N)) == B[N-1] Only this one is offloaded to a verifier. In Pietrzak terminology A[N-1] = x_t, B[N-1] = y_t, and let's call 2^(topK/(2^N)) = distance. So, the server sends A[N-1] (one number, not the whole proof) to a verifier and asks to perform topK/(2^N) modular squarings. The verifier returns the result, it doesn't have to be the whole number, res64 is enough. The server compares the res64 to res64(B[N-1]) and if it matches, the test is verified. The verifier makes no decisions, just performs a few iterations.
Ah, that makes much more sense. Bonus: upon receiving the big proof file, the server can immediately perform the steps to calculate A[N-1] and B[N-1], then discard all the middle values in the proof file, saving server disk space. It can even lop B[N-1] down to its res64 bits, saving more space.

In your opinion, is there any security risk in sending A[N-1] to the original tester? In other words, should we apply the "^random" operation on the server just to be safe?

The security risk becomes the server database if res64(B[N-1]) is stored in the database in plaintext. That is easily solved.

