#122
"Pavel Atnashev"
Mar 2020
101100₂ Posts
No, I was talking about the "trick" preda was going to implement, which actually saves nothing and adds network traffic.
#123
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
5362₁₀ Posts
Quote:
And thanks for the Efficient Proth/PRP Test Proof Scheme thread: https://mersenneforum.org/showthread...633#post538633
Last fiddled with by kriesel on 2020-06-17 at 16:39
#124
"Pavel Atnashev"
Mar 2020
2²×11 Posts
He proposes to calculate the Pietrzak VDF u_i values from stored residues until reaching a certain depth. This is fast. But then he proposes to go deeper, calculating the next u value (the "middle") by directly performing half of the remaining iterations. This step reduces the (future) verifier's computation by exactly that same half of the iterations. Zero sum. But that u value needs to be uploaded, and that's an additional 10 MB of network traffic.
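A toy sketch of that trade-off in Python (the checkpoint layout, names, and spacing here are my assumptions, not gpuowl's actual code): with residues checkpoints[k] = A^(2^(k*spacing)) mod N stored during the test, a "middle" that lands on the checkpoint grid is a free lookup, while one that falls between checkpoints costs real squarings, the same squarings the verifier is spared.

```python
def middle(target, checkpoints, spacing, N):
    """Return A^(2^target) mod N, given checkpoints[k] = A^(2^(k*spacing)) mod N."""
    k, rem = divmod(target, spacing)
    r = checkpoints[k]
    for _ in range(rem):  # extra squarings are needed only off the grid
        r = r * r % N
    return r
```

With spacing = 8, middle(16, ...) is a pure lookup, while middle(20, ...) performs 4 squarings, exactly the 4 squarings the verifier would otherwise have performed itself.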
#125
"Robert Gerbicz"
Oct 2005
Hungary
2²·7·53 Posts
Quote:
(A^h*M)^(2^(top/2)) == M^h*B mod N

but if you fix M then B is determined in a unique way, so the relation has N solutions. That is still nice for us: it means you can fake it only with probability O(1/N), since there are N^2 pairs (M,B).
Quote:
I can verify the proof using O(E*p) memory for N = 2^p-1, but I need 2^E residues for this: the values 3^(2^(t*i/2^E)) mod N, where t ~ p (the next multiple of 2^E after p).
Last fiddled with by R. Gerbicz on 2020-06-17 at 19:21. Reason: small typo
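The halving relation above is easy to check numerically. A minimal Python sketch (the modulus, base, and challenge values are arbitrary illustrations, not protocol constants):

```python
# If M = A^(2^(top/2)) and B = A^(2^top) mod N, then for any challenge h:
#     (A^h * M)^(2^(top/2)) == M^h * B  (mod N)
N = 2**89 - 1              # a small Mersenne modulus, for illustration only
A = 3
top = 16                   # total number of squarings (even)
half = top // 2
M = pow(A, 2**half, N)     # the "middle" residue
B = pow(A, 2**top, N)      # the final residue
h = 123456789              # an arbitrary challenge
lhs = pow(pow(A, h, N) * M % N, 2**half, N)
rhs = pow(M, h, N) * B % N
assert lhs == rhs
```

The identity follows term by term: (A^h*M)^(2^(top/2)) = (A^(2^(top/2)))^h * M^(2^(top/2)) = M^h * B.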
#126
"Pavel Atnashev"
Mar 2020
2²×11 Posts
Quote:
#127
"Robert Gerbicz"
Oct 2005
Hungary
2²·7·53 Posts
Quote:
Last fiddled with by R. Gerbicz on 2020-06-17 at 20:20. Reason: typo
#128
"Pavel Atnashev"
Mar 2020
2C₁₆ Posts
Quote:
All of this is trivial to compute unless h = hash(B), which is the cornerstone of Pietrzak VDF security.
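A minimal Fiat-Shamir sketch of that point, assuming SHA-256 and a naive transcript encoding (neither is claimed to match the actual implementation): the challenge h is derived from the prover's middle residue B itself, so B cannot be chosen after h is known.

```python
import hashlib

def challenge(x_bytes, B_bytes, y_bytes):
    """Derive the challenge h from the transcript, including the middle B."""
    digest = hashlib.sha256(x_bytes + B_bytes + y_bytes).digest()
    return int.from_bytes(digest[:8], 'big')
```

Any change to B changes h, which destroys the freedom to solve for a fake (M, B) pair after the fact.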
#129
"Mihai Preda"
Apr 2015
3·457 Posts
Quote:
1. The proof is produced once, but is verified multiple times: at least 2 times, maybe more.
2. There might be some asymmetry between the capacity of the producer and that of the verifier. You may have seen in the discussion that the push for high-power proofs (up to power==12) is driven by the desire to have a centralized verifier ("the server"), which removes the need for discrete participants to download the proofs for verification. In this setup it is worthwhile to push some work to the producer just to make things easier for the server.
#130
"Pavel Atnashev"
Mar 2020
2²×11 Posts
Okay, this really is up to architectural choices you make. I just want to discuss alternatives, so you make the choice knowing the options.
You definitely can calculate the proof up to a great depth and store it for all eternity to show to anyone willing to verify the test. Although I see no practical reason to do so if there is a central authority ("the server") which coordinates the search for primes. Storing hundreds of MB per test looks like a burden.

Also, having a centralized verifier requires significant computation power and may not scale well. Did you run tests to see how much time it takes to compute (x_12, y_12) from u_i and verify x_12 -> y_12? One option to decrease the load on the server is to offload the verification of (x_t, y_t) to a user, who really needs to download only x_t and perform the necessary amount of iterations.

Personally, I see no reason to go deeper than 8. The amount of data involved becomes huge without really speeding up the search for primes.

It's really an optimization issue. I see bandwidth, storage and server load as parameters important enough to optimize. And the optimal strategy for numbers with 30M digits can definitely be different from that for numbers with 1M digits.
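The offloaded final check can be sketched like this (x_t, y_t, T and t follow the notation above; everything else is an assumption): the user downloads only x_t and redoes the T/2^t squarings remaining at proof depth t.

```python
def verify_leaf(x_t, y_t, T, t, N):
    """Check y_t == x_t^(2^(T/2^t)) mod N by direct squaring."""
    r = x_t
    for _ in range(T >> t):   # T/2^t squarings left at proof depth t
        r = r * r % N
    return r == y_t
```

At depth t = 8 this is 1/256 of a full test per volunteer, which is the bandwidth/CPU trade-off being discussed.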
#131
Jun 2003
13BB₁₆ Posts
Quote:
Quote:
The only downside with higher values is the amount of bandwidth consumed while uploading the files.

EDIT: For regular users it might be OK, especially if you spread out the uploads over the course of the test. But for institutional users with 100s of clients, this might be a major system issue.

Last fiddled with by axn on 2020-06-18 at 05:01
#132
"Pavel Atnashev"
Mar 2020
2²·11 Posts
Quote:
A malicious verifier can only produce a false negative. But the server just sends x_t to the next verifier until consensus is reached. And if the result is still negative, which is extremely rare, some action needs to be taken on the server to verify it. A malicious verifier can't produce a false positive; there are additional steps to ensure that.

You can't do that. All the proof data can be computed only at the end of the test. It's critically important.