You got me wrong, I am not "against" improving the current situation (which indeed kinda sucks! and this has been known for a long time, and discussed repeatedly on the forum!). My objection is ONLY to the phrase I quoted. I salute your initiative and effort, and I am convinced you CAN improve things (as explained below). I just want to stress that you CANNOT eliminate DC with things like GC or VDF.
A proof of work is easy to implement, but it only shows that you did the work, not that the work is correct. If an unlucky bit flips, you are screwed, and yet the proof of work is still valid. On the other hand, any residue-like information you get from whatever test you run, which can be verified as correct in x% of the time of the test, can also be [U]produced[/U] in about the same x% of the time, without doing the test, by following the same steps the "verifier" would do. If you can do that, then you have just discovered a new testing method that only takes x% of an LL/PRP test, and who will need LL/PRP anymore?

GC, by the way, is just probabilistic, and accepting it without a DC is the same as accepting a probable prime as prime without a rigorous proof. Actually a bit worse, being probabilistic. So, yes, PLEASE make the LL/PRP/whatever tests more error-proof, more cheat-proof, more whatever-proof. We will help where we can: we can help with testing, even give you access to our toys, or even help with the math. But don't expect to eliminate DCs until your "tests" can also produce a factor. Maybe my understanding of all these things is plain wrong... but I can't see any other way to eliminate the DCs.
[QUOTE=LaurV;546809]
A proof of work is easy to implement, but that only shows that you did the work, not that the work is correct. If an unlucky bit flips, you are screwed, and yet the proof of work is valid. On the other hand, any residue-like information you got after any kind of test you do, which can be verified in x% of the time of a test as being correct, can also be [U]produced[/U] in about the same x% of the time of a test, without doing the test, just following the same steps that the "verifier" would do. If you can do that, then you just discovered a new testing method that only takes x% of the LL/PRP tests, and well, who will need LL/PRP anymore?[/QUOTE] The PRP-Proof discussed here is not a proof of work, it is a proof of the correctness of the PRP residue (at iteration topK). While it can be verified quickly by anybody, it cannot be generated faster than doing the PRP test. Your assertion above that the verifier could produce the proof in the same amount of time as the verification by "following the same steps" simply does not hold. It is similar in a way to finding a factor (hard) vs. verifying the factor (easy). Or, in general, to what are called one-way functions, where there is a difficulty asymmetry between the two directions.
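The factor analogy can be made concrete in a couple of lines. The numbers below are the well-known Cole factorization of 2^67 - 1: checking that q divides 2^p - 1 is a single modular exponentiation, while finding q in the first place took Cole "three years of Sundays".

```python
# Verifying a factor of a Mersenne number is one modular exponentiation:
# q divides 2^p - 1 exactly when 2^p = 1 (mod q).
p = 67
q = 193_707_721               # Cole's 1903 factor of 2^67 - 1
assert pow(2, p, q) == 1      # the fast direction: verification
# The slow direction -- factoring 2^67 - 1 to find q -- is the hard part.
```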
SHA-3 hash
By popular demand (@retina) I changed the hash in the "reference implementation" in gpuowl to SHA-3.
The hash is run with an output of 256 bits. This full output (256 bits) is chained (when generating the sequence of hashes, e.g. h1 = hash(h0, M1), etc.), but when computing the exponents, a truncated 64-bit value of the hash is used (please see the gpuowl source code if interested in the details). I hope this raises no concerns.
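A minimal sketch of that chaining scheme, assuming SHA3-256 and a 64-bit truncation as described. The byte layout, truncation convention, and input encoding here are illustrative only; gpuowl's source defines the actual format.

```python
import hashlib

def chain(h_prev: bytes, middle: bytes) -> bytes:
    # The full 256-bit SHA3-256 output is carried through the chain:
    # h_i = hash(h_{i-1}, M_i)
    return hashlib.sha3_256(h_prev + middle).digest()

def exponent(h: bytes) -> int:
    # Only a truncated 64-bit value of the hash is used as an exponent.
    # (Byte order and which bytes are taken are illustrative choices here.)
    return int.from_bytes(h[:8], "little")

h0 = hashlib.sha3_256(b"illustrative proof header").digest()
h1 = chain(h0, b"M1 middle-residue bytes")   # placeholder residue data
h2 = chain(h1, b"M2 middle-residue bytes")
assert len(h2) == 32 and exponent(h2) < 2**64
```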
Security of the proof
The PRP-Proof is based on Pietrzak's VDF article. The math in Pietrzak's article is solid, but I think our setup departs from Pietrzak's assumptions in one aspect. I'd like to explicitly disclose this here before it's "exposed as a fatal flaw of the proof".
In my understanding (math people please correct me), the proof becomes weak (can be faked) if the author knows a factorization of the modulus (the Mersenne candidate, in our case). But the case we want to be protected against through the proof mechanism is a Mersenne prime acquiring a false proof showing it to be composite. This is not possible because, for a prime, the proof cannot be faked (no factorization is available). [maybe] The theoretical possibility remains that an agent could produce a fake proof of a composite candidate being prime, based on a factor known to the agent. While this possibility is slightly annoying, it is not dangerous, because such a case would be exposed immediately due to the interest in the new presumed prime.
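One concrete reason a known factorization weakens the scheme: with the factors of N in hand, one can compute Carmichael's lambda(N) and reduce the exponent 2^L modulo it, evaluating g^(2^L) mod N in two cheap modular exponentiations instead of L squarings. A toy illustration (the small composite Mersenne number 2^11 - 1 = 23 * 89 stands in for a real candidate):

```python
# With the factorization of N known, g^(2^L) mod N can be computed
# without L squarings, by reducing the exponent modulo lambda(N).
from math import lcm

N = 2**11 - 1                # 2047 = 23 * 89, a composite Mersenne number
lam = lcm(23 - 1, 89 - 1)    # Carmichael's lambda for N = 23 * 89
g, L = 3, 10**6              # a million squarings the honest way...

shortcut = pow(g, pow(2, L, lam), N)   # ...or two cheap pow() calls

x = g
for _ in range(L):           # honest repeated squaring (the slow path)
    x = x * x % N
assert x == shortcut
```

Without the factorization, lambda(N) is unknown, and the only known way to the final residue is the full squaring chain.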
[QUOTE=preda;546862]By popular demand (@retina) ...[/QUOTE]OMG I'm popular?
:tu:[QUOTE=preda;546863][maybe] the theoretical possibility remains that an agent may produce a fake proof of a composite candidate being prime based on a factor known to the agent. While this possibility is slightly annoying, it's not dangerous because such a case would be exposed immediatelly because of the interest in the new presumed prime.[/QUOTE]If you know a factor that no one has then ... People are weird (yes they are) and sooner or later someone will try it. For some brief fame or something.
[QUOTE=retina;546865]If you know a factor that no one has then ...
People are weird (yes they are) and sooner of later someone will try it. For some brief fame or something.[/QUOTE] Yes, but it's not a problem. First, it may be difficult to build such a proof even knowing a factorization of the candidate (it may well be more difficult than just running the PRP the honest way). I say, if somebody does do it, they deserve kudos for the non-trivial analysis effort and for showing that it can be done. And for the project (GIMPS) it's not a big deal to have somebody claim a prime that wasn't.
[QUOTE=preda;546870]Yes, but it's not a problem. First, it may be difficult to build such a proof even knowing a factorization of the candidate (it may well be more difficult than just running the PRP the honest way). I say if somebody does do it, they deserve kudos for the non-trivial analysis effort and for showing that it can be done. And for the project (GIMPS) it's not a big deal to have somebody claim a prime that wasn't.[/QUOTE]
With knowledge of a new factor (found after the proof is published), the verifier actually gains a faster way to catch cheaters: just check that ((g^(2^L) mod N) mod q) equals g^(2^L) mod q, where g^(2^L) mod N is a single full residue from the proof. If they differ, gotcha: the proof is broken. And for this you need only one full residue, and the cheater has only a 1/q chance of faking it, whatever he does.
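That spot-check can be sketched as follows. Given the claimed full residue R = g^(2^L) mod N from the proof and a newly found prime factor q of N, the verifier computes g^(2^L) mod q cheaply (Fermat's little theorem lets the huge exponent 2^L be reduced modulo q - 1) and compares it with R mod q. The function name and the toy numbers are illustrative.

```python
# Spot-check a proof residue against a later-discovered prime factor q of N.
# R is the claimed full residue g^(2^L) mod N from the proof file.
def residue_consistent(R: int, g: int, L: int, q: int) -> bool:
    # Since q is prime, g^(2^L) mod q = g^(2^L mod (q-1)) mod q,
    # so the check costs only two modular exponentiations.
    expected = pow(g, pow(2, L, q - 1), q)
    return R % q == expected

# Toy example: N = 2^11 - 1 = 2047 = 23 * 89. An honest residue passes...
N, g, L = 2047, 3, 1000
R = pow(g, 2**L, N)                 # honest g^(2^L) mod N
assert residue_consistent(R, g, L, 23)
assert residue_consistent(R, g, L, 89)
# ...while a faked residue is caught with probability about 1 - 1/q.
assert not residue_consistent((R + 1) % N, g, L, 23)
```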
[QUOTE=preda;546863]
In my understanding (math people please correct me), the proof becomes weak (can be faked) if the author knows a factorization of the modulo (the mersenne candidate in our case). [/QUOTE] Would that make it possible for someone who has found a factor to report a bogus proof that it's composite, thus appearing to have done a lot more work than they really did? It's not a big issue, because the number really would be composite, but it might be used to move up the work-done table. (I've obviously spent too much time looking for ways to break things when I was working in computer security.) Chris
PRP-Proof file spec moved
The proof file spec is now maintained here:
[url]https://github.com/preda/gpuowl/wiki/PRProof-File-Spec[/url] Feel free to provide feedback, ask for clarifications, etc.
I still didn't get how this works.
1. I reserve Mx for PRP.
2. I generate x random residues in a file, without doing any work, just garbage data.
3. I apply your calculus to my garbage data and generate the verification file.
4. I send it to PrimeNet and get the LL/PRP credit in just 5 minutes (including the "trusted" particle attached to it, because it is verified, you know?).

Which of these steps would be impossible, and why? Of course the first is not necessary; I don't want to create suspicion by reporting results a very short time after reservation, so I will just report results without reservation.

Of course, I understand the part about there being some relation between some residues in that list, so they are not totally random, but this is where we are stuck. Say the residue at iteration h has some relation with the residue at iteration k. Then either there is a small difference between h and k (to allow the verification process to be fast, without a lot of squarings), in which case I can also produce this relation fast, starting from a random h (and squaring, or whatever); or there is a large distance between h and k, making it difficult for me to fake, but then you have to spend a huge amount of time to verify it (practically repeating a part of the initial test, and still being unsure that iteration h was true, except when it was the first). In what we are currently doing today, h is 1 and k is x-1, the verification process is called "Double Checking", and we just use residues directly, not hashes of them.
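For reference, the "relation between residues" in a squaring chain is simply r_k = r_h^(2^(k-h)) mod N, and garbage residues fail it; what Pietrzak's halving protocol adds (not shown here) is a way for the verifier to check a long chain with only logarithmically many such relations instead of redoing the run. A toy sketch of the basic relation, with illustrative numbers:

```python
# In a squaring chain r_{i+1} = r_i^2 mod N, residues at positions h < k
# satisfy r_k = r_h^(2^(k-h)) mod N. Garbage "residues" fail this check.
N = 2**31 - 1                     # toy modulus for illustration
r = [3]
for _ in range(1000):             # honest chain of 1000 squarings
    r.append(r[-1] * r[-1] % N)

h, k = 200, 900
assert pow(r[h], 2**(k - h), N) == r[k]     # honest residues pass

fake = (r[k] + 1) % N                        # a tampered "residue"...
assert pow(r[h], 2**(k - h), N) != fake      # ...is caught
```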
[QUOTE=preda;546996]Feel free to provide feedback, ask for clarifications, etc.[/QUOTE]
What about having an EOF (0x1A) after the \n, like in the PNG header? That way [C]type[/C] and similar tools will stop after printing the header.