mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Math (https://www.mersenneforum.org/forumdisplay.php?f=8)
-   -   VDF (Verifiable Delay Function) and PRP (https://www.mersenneforum.org/showthread.php?t=24654)

preda 2020-05-28 03:30

Proof and Primenet
 
The goal of using the PRP-proof with primenet is to eliminate the double-checks. (I estimate right now 10%-20% of the PRP/LL computing power is used for DCs)

The main problem is the size of the proof (e.g. 120MB for a 100M exponent) which makes the internet transfer significant. A secondary question is who contributes the compute-power for proof verification. I'll try to enumerate some possibilities.

Setup:
User X gets assigned a PRP task. Using software A, he starts the PRP test with proof generation, indicating the desired proof power. The PRP test proceeds and saves a large number of residues (needed for proof generation) as it goes along. After the PRP test finishes, the proof is generated (by program A) using this PRP data. After the proof is generated, it is verified by A (to make sure that generation went without issues), after which all the temporary residues that were needed for proof generation are deleted. At this point X has:
- the usual one-line JSON result of the PRP test, indicating (among other things) composite/probable-prime status and the res64
- the proof, a 120MB file.
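For concreteness, the one-line result could be sketched as JSON like this (the field names and values below are my illustrative assumptions, not primenet's actual schema):

```python
import json

# Hypothetical PRP result line; field names are illustrative only,
# not the actual primenet report format.
result = {
    "exponent": 100003957,        # hypothetical exponent
    "worktype": "PRP-3",
    "status": "C",                # "C" composite, "P" probable prime
    "res64": "ABCD1234DEADBEEF",  # low 64 bits of the final residue
    "proof": {"power": 9, "file": "..."},  # proof metadata (assumed fields)
}

line = json.dumps(result)     # the one-line result sent to primenet
parsed = json.loads(line)     # round-trips cleanly
```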

1. Local proof verification
User X can now do proof verification. For this he uses software B and C, which independently read the proof file and validate it. After validation, B and C sign their results and send them to primenet. (The results are again small JSON files.)

2. Primenet-centered verification
User X uploads the proof (120MB) to primenet. Primenet queues the proof file for verification. A new task type exists, "proof verification". User Y is assigned the "proof verification" task; he downloads the proof (120MB), verifies it, and uploads the result to primenet. At this point primenet has both the initial result from X and the independent verification from Y. Optionally, primenet can even triple-verify the proof (by user Z), after which the proof file is deleted from the server to save space, or archived.

3. Verifier-node verification
User X uploads the proof to a special Verifier server. (the Verifier may run in AWS, and thus have free inbound bandwidth). The Verifier server verifies the proof, twice, using two independent methods. The Verifier signs the results, publishes them to primenet, and removes or archives the proof file.

Advantages/disadvantages of these variants to be discussed next.

retina 2020-05-28 04:07

I would be more comfortable with a more commonly used hash. Like SHA3. That would make it more accessible. All hashes are fast anyway so saying that blake2 is fast isn't really an advantage IMO.

Also, what happens if/when AWS decides to change the policies about inbound vs outbound traffic? I actually think that a 120MB file is a significant hurdle. And the intermediate local storage of 6GB isn't something to be taken lightly either.
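Those figures are consistent with a back-of-the-envelope estimate, assuming (my assumption for illustration) that a proof of power p keeps 2^p interim residues on disk, while the proof file itself holds roughly the final residue plus p "middle" residues:

```python
exponent = 100_000_000        # a 100M exponent
residue_bytes = exponent // 8 # one full residue is ~12.5 MB
power = 9                     # assumed proof power

# Proof file: final residue plus p middles (assumed construction).
proof_bytes = (power + 1) * residue_bytes   # ~125 MB, close to the 120MB figure

# Temporary disk space during the run: 2^p interim residues.
temp_bytes = (2 ** power) * residue_bytes   # ~6.4 GB, close to the 6GB figure
```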

preda 2020-05-28 06:07

[QUOTE=retina;546634]I would be more comfortable with a more commonly used hash. Like SHA3. That would make it more accessible. All hashes are fast anyway so saying that blake2 is fast isn't really an advantage IMO.
[/QUOTE]

Blake2 is also a well-known and commonly used hash. (Blake, the predecessor of Blake2, was a participant in the NIST hash competition where SHA-3 was the winner.) It is used in various software products and protocols (Wikipedia has details). More notably, it is used as the backbone of the recent Blake3 (based on the Blake2s variant, specifically).

Blake2 has an advantage over SHA-3 in my eyes: it's already implemented in gpuowl. The implementation was easy. It's smaller than SHA-3. I'm open to switching the hash, but I'd need a stronger argument for that.

Just to restate, Blake2 is not a niche hash; it's one of the major mainstream crypto hashes. (It may even be more widely used in products than SHA-3.)

preda 2020-05-28 06:13

[QUOTE=retina;546634]I actually think that a 120MB file is a significant hurdle. And the intermediate local storage of 6GB isn't something to be taken lightly either.[/QUOTE]

I agree about the size of the proof, and the size of the temporary disk space -- they are both [much] larger than I would have liked. A proof does bring major advantages though, so it's a cost we may be willing to pay (or not). That's why I put up for discussion how to integrate it with primenet. My personal favorite would be the "local verification" which bypasses the upload/download issue, but not without tradeoffs.

preda 2020-05-28 06:59

[QUOTE=retina;546634]I would be more comfortable with a more commonly used hash. Like SHA3. That would make it more accessible. All hashes are fast anyway so saying that blake2 is fast isn't really an advantage IMO.[/QUOTE]

What library would you suggest that implements SHA3-256 and is widely available on Linux and Windows? Or small C source code implementing it that could easily be merged into the project.

BTW, reading a bit more, the comparison of Blake2 vs. SHA-3 looks roughly like this: both offer comparable security strength, and Blake2b is about three times as fast as the fastest SHA-3 variant in software.
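For what it's worth, both hashes ship in Python's standard hashlib, so a rough local comparison is easy to run (a sketch only; absolute and relative speeds depend on the build and CPU, so I haven't stated expected numbers):

```python
import hashlib
import time

# 16 MB of zeros as a stand-in workload.
data = bytes(16 * 1024 * 1024)

def throughput(name):
    """Return hashing throughput in MB/s for the named hashlib algorithm."""
    h = hashlib.new(name)
    t0 = time.perf_counter()
    h.update(data)
    h.digest()
    return len(data) / (time.perf_counter() - t0) / 1e6

for name in ("blake2b", "sha3_256"):
    print(f"{name}: {throughput(name):.0f} MB/s")
```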

retina 2020-05-28 07:22

[QUOTE=preda;546640]What library would you suggest that implements SHA3-256 and is widely available on Linux and Windows? Or a small C source code that implements it that could be easily merged in the project.

BTW, reading a bit more, the comparison Blake2 vs. SHA-3 looks roughly like this: both are of the same strength, and Blake2b is 3 times as fast as the fastest SHA-3 in software.[/QUOTE]I don't have a library suggestion. But since it is [i]the[/i] hash function as chosen by the gods at NIST I have just assumed it would be widely available. Sorry if that isn't so. There are certainly a bunch of websites offering code, but I don't know which is "the best".

How much reliance upon speed is required of the hash? I would hope that even at 3X speed it would still be only a minor blip in the overall timing.

preda 2020-05-28 07:41

[QUOTE=retina;546642]
How much reliance upon speed is required of the hash? I would hope that even at 3X speed it would still be only a minor blip in the overall timing.[/QUOTE]

For one proof (either construction or verification), the amount of data hashed is about the same as the size of the proof (what is hashed is the residue B and all the middles). So let's say 120MB. Even a slow hash would not be a problem, agreed.
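So the hashing step is a single linear pass over roughly proof-sized data. A sketch using Python's hashlib.blake2b as a stand-in (this is not gpuowl's actual hash chaining, just an illustration of the data volume involved):

```python
import hashlib

def hash_residues(residues):
    """Fold a sequence of residues (bytes) into one Blake2b digest.

    A sketch of how little work the hashing step is; not gpuowl's
    actual hash construction, which I have not reproduced here.
    """
    h = hashlib.blake2b(digest_size=32)
    for r in residues:
        h.update(r)
    return h.hexdigest()

# Ten 12.5MB pseudo-residues: about one 120MB proof's worth of data.
digest = hash_residues(bytes(12_500_000) for _ in range(10))
```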

preda 2020-05-28 08:22

SHA-3 vs. Blake2 hash preference
 
Anybody else has a preference about the hash function to use?

- Blake2 is already implemented in gpuowl.
- For SHA-3 I found source code that I could incorporate here: [url]https://www.fossil-scm.org/home/file?name=src/sha3.c&ci=tip[/url]

M344587487 2020-05-28 09:56

OpenSSL is my go-to library for crypto, and it has SHA3. For future reference, mbedtls is a nice library of self-contained crypto implementations, but it doesn't appear to have SHA3. If speed is a primary concern, SHA256 is hardware-accelerated on modern x86, and let's be honest, SHA256 isn't going to be broken in a meaningful way anytime soon. All three suggestions would do the job fine, so IMO stick with what you know.

chris2be8 2020-05-28 15:47

[QUOTE=preda;546638]My personal favorite would be the "local verification" which bypasses the upload/download issue, but not without tradeoffs.[/QUOTE]

The obvious snag is that it would be *possible* for someone to fake the verification. Remember El Dorko. Would he have any qualms about faking results?

Chris

kriesel 2020-05-28 19:27

[QUOTE=chris2be8;546661]The obvious snag is that it would be *possible* for someone to fake the verification. Remember El Dorko. Would he have any qualms about faking results?

Chris[/QUOTE]
Local verification plus El Dorko and his ilk means we also need nonlocal verification, or an independent PRP DC.

