#78
"Robert Gerbicz"
Oct 2005
Hungary
2²×7×53 Posts
Quote:
Code:
? check_proof(921,151,997,500)
Test#1: h=594 passed the test
Test#2: h=716 failed the test
Test#3: h=471 passed the test
Test#4: h=538 failed the test
Test#5: h=211 failed the test
Test#6: h=550 failed the test
Test#7: h=510 passed the test
Test#8: h=376 failed the test
Test#9: h=679 failed the test
Test#10: h=958 failed the test
Test#11: h=843 passed the test
Test#12: h=830 failed the test
Test#13: h=911 failed the test
Test#14: h=563 failed the test
Test#15: h=481 failed the test
Test#16: h=243 passed the test
Test#17: h=676 failed the test
Test#18: h=236 failed the test
Test#19: h=46 failed the test
Test#20: h=937 failed the test

Last fiddled with by R. Gerbicz on 2020-06-08 at 18:18 Reason: grammar
#79
"Mihai Preda"
Apr 2015
3·457 Posts
Quote:
https://github.com/preda/gpuowl/wiki/PRProof-File-Spec
https://github.com/preda/gpuowl/wiki/Proof-Verification

The file changes are:
- topK is above E. topK is defined as the lowest multiple of 1024 larger than E. As such, it is no longer stored in the proof file (it is derived from E), and it is no longer part of the hash (there is no choice over its value).
- the "classic" res64 is no longer part of the proof file.

The verification changes are: once the proof is verified, i.e. the residue at iteration topK is proved correct, topK-E iterations can be run from 9 (or, equivalently, topK-E+1 iterations from 3) to discover the residue expected at iteration topK for a probable prime. Comparing the proof residue "B" at iteration topK with this computed residue decides whether the proof corresponds to a probable prime or to a composite.

Because topK is after E, the proof no longer allows obtaining the "classic" res64 of type 1 or type 4. The gpuowl code has also been updated with these changes.
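The topK derivation and the probable-prime decision described above can be sketched in a few lines. This is a minimal illustration, not gpuowl code: the function names are mine, plain big-integer arithmetic stands in for gpuowl's FFT residues, and it is only practical for tiny exponents. It relies on the fact that for a prime M = 2^E - 1, the PRP residue 3^(2^E) = 3^(M+1) ≡ 9 (mod M).

```python
def top_k(E: int) -> int:
    """Lowest multiple of 1024 strictly larger than E (derived from E, not stored)."""
    return (E // 1024 + 1) * 1024

def expected_prp_residue_at_topk(E: int) -> int:
    """Residue a probable prime must show at iteration topK:
    start from 9 at iteration E and square (topK - E) more times."""
    M = (1 << E) - 1
    r = 9
    for _ in range(top_k(E) - E):
        r = r * r % M
    return r

def is_probable_prime(E: int, verified_B: int) -> bool:
    """Compare the proof's (already verified) residue B at iteration topK
    against the expected probable-prime residue."""
    return verified_B == expected_prp_residue_at_topk(E)
```

For example, with E=7 (M=127, prime), `is_probable_prime(7, B)` holds exactly when B equals the true residue 3^(2^1024) mod 127, since topK=1024.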
#80
"Mihai Preda"
Apr 2015
3×457 Posts
For a 96.5M exponent (around the current wavefront), generating a proof of power 9 takes about 4 minutes (on a Radeon VII). Verifying the same proof takes E/512, i.e. 188540 iterations, which on the same GPU comes to ~2 minutes. This seems to me around the optimum, considering that a proof is normally verified twice.

If instead of 9 a power of 8 is used, the proof generation time is halved to 2 minutes, but the verification time is doubled to 4 minutes. If instead a power of 10 is used, generation is (a bit more than) doubled, say to 9 minutes, and verification is halved; also, the temporary disk space needed for residues while the PRP is proceeding is doubled, to 1024 residues. The generation-to-verification effort ratio should stay roughly the same as exponents increase.

The size of the proof file is linear in power+1, thus there is not a big difference there between powers 8 and 10.

A verifier would not normally want to verify a proof of power 7 or less, because it would be too expensive for him (relative to verifying a proof of power 8-10). The maximum power allowed by the current spec is 10 (this limitation comes from mandating topK to be equal to E rounded up to a multiple of 1024). That leaves 8, 9, and 10 as the practical proof powers.

I wonder whether we should allow this choice (which would probably be the tester's choice), or whether we should mandate a proof power of e.g. 9 and stop worrying about it. A nice property of the current proof plan is that a proof of a given power is unique for an exponent -- any producer should produce exactly the same proof (assuming correct computation). If the power flexibility is removed, the (correct) proof becomes unique for the exponent, which may simplify things a bit for primenet's integration of proofs.

Last fiddled with by preda on 2020-06-11 at 12:08
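The power tradeoff above can be captured in a back-of-envelope cost model. The formulas are my reading of the post, not the spec: verification costs about E/2^power squarings, the proof file holds power+1 full E-bit residues, and the temporary disk holds 2^power residues during the PRP test.

```python
def verification_iters(E: int, power: int) -> int:
    # Each unit of power halves the verifier's work.
    return E // (2 ** power)

def proof_file_bytes(E: int, power: int) -> int:
    # power+1 residues of E bits each: only linear growth in power.
    residue_bytes = (E + 7) // 8
    return (power + 1) * residue_bytes

def temp_disk_bytes(E: int, power: int) -> int:
    # 2^power intermediate residues kept while the PRP runs: doubles per power.
    residue_bytes = (E + 7) // 8
    return (2 ** power) * residue_bytes

E = 96_500_000
for power in (8, 9, 10):
    print(power,
          verification_iters(E, power),
          proof_file_bytes(E, power) >> 20,   # MiB
          temp_disk_bytes(E, power) >> 30)    # GiB
```

At power 9 this gives roughly 188K verification iterations and a proof of about 115 MiB for a 96.5M exponent, consistent with the figures quoted in the thread.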
#81
"Mihai Preda"
Apr 2015
2533₈ Posts
The simplest solution is centralized management of proofs by primenet. Let's see how big a cost that represents.

1. The PRP tester

The PRP tester, if he chooses to generate a proof, would need to upload it to primenet together with the PRP result: about 120MB for a 100M exponent, with proof size growing proportionally to the exponent. For example, a heavy producer that currently does 8 PRP tests per day would need to upload about 1GB per day. An extreme tester doing 400 PRP tests per day would need to upload 50GB per day (apparently 50GB transferred out of AWS costs about $5 -- assuming the tester runs in AWS and the primenet server outside).

2. The verifier

The verifier would need to download a proof of size 120MB, then run about 200K iterations on it for verification.

3. The primenet server

Assuming primenet handles 1000 PRP proofs per day two-way (1 upload + 1 download), that is a traffic of about 250GB per day, or 7.5TB per month. The server also needs storage: once it receives a proof, it stores it in a queue of proofs "available for verification" and offers it to verifiers. Once a proof is sufficiently verified (twice, with one verification originating from the original PRP tester), it can be deleted. Thus the server only needs to store the proofs that have not been verified yet; a 10TB HDD could hold about 80'000 proofs.

Of course I'm not saying that any of the above is easy or free; I'm just laying out in plain terms the implications of the simplest setup. People more in the know could give opinions on the feasibility of such a setup.

Last fiddled with by preda on 2020-06-11 at 12:34
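The traffic and storage figures above can be sanity-checked in a few lines; the 120MB proof size and the 1000 proofs/day rate are the post's own assumptions, not measurements.

```python
PROOF_MB = 120            # proof for a ~100M exponent
proofs_per_day = 1000

# Each proof transits the server twice: one upload, one download.
daily_gb = proofs_per_day * PROOF_MB * 2 / 1000
monthly_tb = daily_gb * 30 / 1000
print(daily_gb, monthly_tb)   # 240.0 GB/day, 7.2 TB/month

# Storage: only not-yet-verified proofs need to be kept.
hdd_tb = 10
capacity = hdd_tb * 1_000_000 // PROOF_MB
print(capacity)               # 83333 proofs fit on a 10TB drive
```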
#82
Undefined
"The unspeakable one"
Jun 2006
My evil lair
3×5×7×59 Posts
Quote:
I think two separate independent verifications are needed, and from two different classes of software. The reason being that if verification code A has a bug, we don't want Evil Mallory to be able to quickly construct proofs that fool verification code A into confirming a bogus proof. So we add in verification code B, from a different code suite, to do another verification.
#83
"Mihai Preda"
Apr 2015
2533₈ Posts
Quote:
But for comparison, for classic-DC we only require *one* independent DC. Even with only *one* independent verification, the proof would be at the same level as the current DC.

The secondary self-verification is present only because it's so cheap that it's hard to find a reason not to do it -- the PRP tester already has the proof locally. Also, it would be in the tester's own interest to validate the just-produced proof (by verifying it) before incurring the upload/download cost.

But yes, the server may want to do at least two independent verifications in the beginning, ideally with different software etc. -- all this in addition to the self-verification done by the PRP tester himself.

There is value in the self-verification of the newly-produced proof: first, it makes sure the server never receives invalid (wrongly generated) proofs. Second, it validates the result very strongly for the PRP tester himself (against hardware errors and a large class of software errors). It does not validate the result against a malicious PRP tester from the server's point of view, but that's what the independent verification is there for.

Personally I would like the GPU results to be independently verified on CPU, and vice-versa.

Last fiddled with by preda on 2020-06-11 at 13:14
#84
Undefined
"The unspeakable one"
Jun 2006
My evil lair
3·5·7·59 Posts
Quote:
Your proposal is that we generate only one proof. So to balance that we need (at least) two verifications (with different software etc.). Someone self-verifying before uploading is fine, of course, but it doesn't count as a verification to any external observer: Evil Mallory can claim any number of self-verifications, but that doesn't mean any were actually done.

Last fiddled with by retina on 2020-06-11 at 13:25
#85
"Mihai Preda"
Apr 2015
2533₈ Posts
Quote:
Quote:
#86
|
Undefined
"The unspeakable one"
Jun 2006
My evil lair
3·5·7·59 Posts
The difference is that now the server verifies and clients create independent proofs.

But now you want one proof only. And if there is a verification software bug, then the whole house of cards falls down. Result: we need more than one verification. You can't avoid this, IMO.
#87
P90 years forever!
Aug 2002
Yeehaw, FL
16546₈ Posts
I have some catching up to do on this thread, so my comments may not make sense.

As to whether 2 proof verifications are needed: should we hold PRP verification to a higher standard than the current scheme? The current scheme has many weaknesses. Mr. Bad Guy could create 2 accounts and self-verify. Prime95 could have an obscure bug and be used for both the first-time and DC runs. There are other holes. It is a system largely built on trust.

I see a coding bug in the verifier as minimal risk. I assume any serious bug would spit out "original run no good" errors. Plus, I assume the verifier source will be available for inspection.
#88
P90 years forever!
Aug 2002
Yeehaw, FL
2·53·71 Posts
Is it feasible for a central server to do all the verifications? At what cost?

Benefits include halving the bandwidth requirements and removing the worry of a malicious verifier. If there are 500 PRP tests per day coming in, and we went with power = 10, the server would need to do 50M iterations per day. Is AWS the best solution?
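The 50M-iterations figure above checks out with quick arithmetic, assuming a ~100M wavefront exponent (my assumption, in line with the earlier posts) and E/2^power iterations per verification.

```python
E = 100_000_000          # assumed wavefront exponent
power = 10
tests_per_day = 500

iters_per_proof = E // 2 ** power          # one verification per proof
daily_iters = iters_per_proof * tests_per_day
print(iters_per_proof, daily_iters)        # 97656 per proof, 48828000 per day
```

That is about 48.8M iterations a day, matching the "50M" estimate.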