2019-08-02, 09:20   #1
preda ("Mihai Preda")

VDF (Verifiable Delay Function) and PRP

On HackerNews today I saw a story about VDFs, "Verifiable Delay Functions", one variant of which is: f(x, N, M) = x^(2^N) mod M

https://news.ycombinator.com/item?id=20587753
https://aws.amazon.com/blogs/startup...ge-blockchain/
https://blog.trailofbits.com/2018/10...unctions-vdfs/

Apparently they are planning to develop an ASIC to compute that. It would be nice if that ASIC could be re-purposed for wider modular squarings.

Another interesting aspect is the claim that the VDF result is verifiable in time O(log(N)), which might give an idea for a "proof of work" or for verification of a PRP result. I don't yet understand how the VDF is verifiable in log time, though; maybe somebody can explain.
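
In case it helps to see it concretely: the delay function itself is just N sequential modular squarings. A minimal Python sketch (the function name is mine):

Code:
def vdf(x, N, M):
    # f(x, N, M) = x^(2^N) mod M: N modular squarings that must be done
    # one after another -- the sequentiality is what creates the "delay"
    for _ in range(N):
        x = x * x % M
    return x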
2019-08-02, 21:05   #2
ewmayer

Verifiable correct modular exponentiation, asymptotically fast verification of results, and immunity to parallel speedups ... that sounds a lot like the problem described in the "20-year-old MIT LCS35 Time Capsule Crypto-Puzzle solved" thread.
2019-08-05, 12:44   #3
preda ("Mihai Preda")

PRP verification

A list of VDF articles: https://vdfresearch.org/

Of particular interest, I found:
Pietrzak: https://eprint.iacr.org/2018/627.pdf
Wesolowski: https://eprint.iacr.org/2018/623.pdf

which present two different ways of constructing VDF proofs with different trade-offs.

Pietrzak's proof is fast to produce but log(exponent) residues in size, while Wesolowski's is a single residue in size but costlier to produce.

For example, for a 90M exponent, Pietrzak's proof would be about 256 MB in size vs. Wesolowski's 10 MB (each residue is p bits, about 11 MB, and Pietrzak's proof contains roughly log2(p) of them, one per halving step). OTOH, if I understand correctly, Pietrzak's proof can be produced with negligible computation cost (overhead relative to the PRP itself).
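
A quick back-of-the-envelope check of those sizes (a sketch; the exact count of stored midpoints depends on where the halving recursion stops):

Code:
import math

p = 90_000_000                      # exponent: each residue is a p-bit number
residue_mb = p / 8 / 1e6            # ~11.25 MB per residue
levels = math.ceil(math.log2(p))    # ~27 halvings, one midpoint residue each
print(f"Pietrzak: ~{residue_mb * levels:.0f} MB, "
      f"Wesolowski: ~{residue_mb:.0f} MB")   # ~300 MB vs ~11 MB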

Anyway, it seems this allows us to get rid of the PRP double-check completely, at the cost of a large transfer (256 MB per exponent) to a verifier. Such a possibility seems like a huge opportunity to me. (Of course I would have liked the proof to be smaller than that, but even at this size it seems like a huge gain vs. a double-check.) In other words, the double-check is replaced by the verification, which is log(doubleCheck) in effort.

Maybe somebody more math-inclined can confirm how this applies to our particular context.

2019-08-07, 16:42   #4
GP2

Quote:
Originally Posted by preda
For example, for a 90M exponent, Pietrzak's proof would be about 256 MB in size vs. Wesolowski's 10 MB. OTOH, if I understand correctly, Pietrzak's proof can be produced with negligible computation cost (overhead relative to the PRP itself).

Anyway, it seems this allows us to get rid of the PRP double-check completely, at the cost of a large transfer (256 MB per exponent) to a verifier. Such a possibility seems like a huge opportunity to me. (Of course I would have liked the proof to be smaller than that, but even at this size it seems like a huge gain vs. a double-check.) In other words, the double-check is replaced by the verification, which is log(doubleCheck) in effort.
This is very feasible.

As a general rule, the AWS cloud only charges for outward data transfers. If you upload data from the Internet into S3 storage in the cloud, it costs absolutely nothing. I imagine Google and Microsoft (Azure) have similar policies.

This makes sense: if you upload something, you will most likely be storing it for a while. So they make the upload bandwidth free and hope to make money over time with data storage fees. Downloading data from the cloud to the Internet, on the other hand, is not free.

So if a future version of Primenet were hosted in the cloud, your large-transfer verification would be entirely practical, and costless.

The only question is what to do with the 10 MB or 256 MB file after it has been processed and checked. If it's discarded, there's no problem. If you want to archive it for the long term, just in case, the cheapest storage would be S3 Glacier Deep Archive, at $1 per TB per month. Normally you would only want to re-access such a file on very rare occasions.

2019-08-07, 18:41   #5
ATH (Einyen)

Discussing the feasibility of a future primenet in the cloud seems like jumping 10 steps ahead.

The most important issue is: would this even work for Mersenne PRPs?
I do not claim to understand all the math and the proofs, but Pietrzak requires that N = p·q where (p-1)/2 and (q-1)/2 are also primes, which is not at all the case for a general composite Mersenne number 2^p - 1.

Composite Mersenne numbers generally have more than two factors, and those factors are not safe primes.

Quote:
2.3 On using safe primes
Another difference to the setting of [RSW96] is that we assume that N = p · q is the product of random safe primes, whereas [RSW96] just assume random primes. We do this to make sure that QR_N (and thus also QR_N^+) contains no sub-group of small order; this property is required to prove statistical soundness.

2019-08-07, 20:04   #6
R. Gerbicz ("Robert Gerbicz")

Quote:
Originally Posted by ATH
Discussing the feasibility of a future primenet in the cloud seems like jumping 10 steps ahead.

The most important issue is: would this even work for Mersenne PRPs?
I think this was already discussed: you can check the PRP residue with high confidence if you find a factor of m_p (obviously, in real life, after the test):

Let res = 2^(m_p) mod m_p.
If q divides m_p then res ≡ 2^(m_p mod (q-1)) (mod q) (and computing this is easy), and it is pretty hard to fake without knowledge of q.
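
A minimal Python sketch of this consistency check (function name mine), with the known factor 23 of 2^11 - 1 = 2047 as a toy example:

Code:
def check_residue_against_factor(p, q, res):
    # res should be the full residue 2^(m_p) mod m_p; by Fermat's little
    # theorem, res mod q must equal 2^(m_p mod (q-1)) mod q for a prime
    # factor q of m_p, which is cheap to compute even for huge p
    mp = 2**p - 1
    assert mp % q == 0, "q must divide m_p"
    return res % q == pow(2, mp % (q - 1), q)

# toy example: 23 divides 2^11 - 1 = 2047
p, q = 11, 23
res = pow(2, 2**p - 1, 2**p - 1)   # the expensive full computation
print(check_residue_against_factor(p, q, res))   # True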
2019-08-07, 21:36   #7
ATH (Einyen)

Quote:
Originally Posted by R. Gerbicz
I think this was already discussed: you can check the PRP residue with high confidence if you find a factor of m_p (obviously, in real life, after the test):

Let res = 2^(m_p) mod m_p.
If q divides m_p then res ≡ 2^(m_p mod (q-1)) (mod q) (and computing this is easy), and it is pretty hard to fake without knowledge of q.
OK, that is nice, but it is not a huge game changer, is it? We are not going to find new factors for a big percentage of Mersenne numbers after the first PRP test.

I do not think this is what Preda had in mind; he wanted to eliminate PRP double-checks completely.

2019-08-09, 23:27   #8
preda ("Mihai Preda")

PRP verification with Pietrzak's protocol

I'll try to describe briefly Pietrzak's verification applied to PRP (as I understand it).

For m = 2^p - 1,
PRP(p) = 3^(m-1) mod m, or equivalently PRP(p)·9 = 3^(2^p) mod m.

Let's fix the modulus m and define
f(x, T) := x^(2^T) mod m.

For x = 3 we compute y = f(x, p). How do we prove to a verifier that the result is correct?

We call the two actors involved "the author" who does the hard work of computing the PRP and producing the proof, and "the verifier" who receives the proof from the author and checks it. The goal is to convince the verifier that the result of the PRP is genuine and correct even when the verifier does not trust the author.

If the author sent the verifier only the PRP result y, the verifier could redo the whole PRP computation and thus validate the result (i.e. the double-check we do currently), but this would be very costly for the verifier, and wasteful because of the duplicated computation.

To make the verification easier, the author also sends the "middle" point u = f(x, p/2), which the author computed anyway on the way to y. (Let's assume for a moment that p is even, although it isn't, so we can halve it without additional trouble.)

The verifier can now verify
f(x, p/2) == u and f(u, p/2) == y,
which is equivalent to verifying f(x, p) == y; but so far there is no reduction in work for the verifier.

Because the two verification "branches" above share the same exponent p/2, they can be combined and verified together. Choosing a random factor r1 (let's say r1 is a 64-bit number), the verifier computes the new seed x1 = x^r1 * u and verifies that:
f(x1, p/2) == y1, where y1 = u^r1 * y.

This works because f(x1, p/2) = (x^r1 * u)^(2^(p/2)) = (x^(2^(p/2)))^r1 * u^(2^(p/2)) = u^r1 * y = y1, using the two branch equalities above.

So we have transformed the verification f(x, p) == y into f(x1, p/2) == y1, halving the work. This halving is possible because the author sent the verifier the value u (the middle point between x and y) -- so u is part of the proof.

But if we have a way to halve the work of the verifier, why stop after only one halving step? Let's look at the next halving:
the author makes available u1, the middle point between x1 and y1: u1 = f(x1, p/4);
r2 is another random 64-bit number chosen by the verifier;
the verifier computes
x2 = x1^r2 * u1,
y2 = u1^r2 * y1,
and verifies f(x2, p/4) == y2
with only a quarter of the work of the author.

And we can keep halving, making the work of the verifier logarithmic in the work of the author. At some point, after k halvings, when p/(2^k) is small enough, the verifier checks f(x_k, p/(2^k)) == y_k directly, and this is the outcome of the verification.

One may observe that we ask the author to publish the values u, u1, u2, etc. as part of the proof, but those (excluding u) depend on the random factors r1, r2, which in the setup above are chosen by the verifier. We solve this by having the author produce the random factors r1, r2 -- not by choosing them (because we don't trust the author) -- but by deriving them from a hash of the results, like this:
r1 = hash(y), r2 = hash(y, y1), etc. (Apparently this trick is called the Fiat-Shamir heuristic.)
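
Putting the pieces together, here is a minimal Python sketch of the protocol as described above (my naming; it assumes the squaring count T is a power of two and uses SHA-256 for the Fiat-Shamir challenges -- a toy illustration, not the exact construction from the paper):

Code:
import hashlib

def f(x, T, m):
    # x^(2^T) mod m by T sequential squarings (the expensive part)
    for _ in range(T):
        x = x * x % m
    return x

def challenge(ys):
    # 64-bit Fiat-Shamir challenge: r1 = hash(y), r2 = hash(y, y1), ...
    data = b"".join(y.to_bytes((y.bit_length() + 7) // 8 or 1, "big") for y in ys)
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

def prove(x, T, m):
    # author: compute y = f(x, T) plus one midpoint u per halving step
    # (a real prover derives each midpoint cheaply from residues saved
    # during the main run, instead of recomputing f at every level)
    y = f(x, T, m)
    ys, proof = [y], []
    while T > 1:
        T //= 2
        u = f(x, T, m)               # midpoint between current x and y
        proof.append(u)
        r = challenge(ys)
        x = pow(x, r, m) * u % m     # x_{i+1} = x_i^r * u_i
        y = pow(u, r, m) * y % m     # y_{i+1} = u_i^r * y_i
        ys.append(y)
    return ys[0], proof

def verify(x, T, m, y, proof):
    # verifier: a few small exponentiations per halving step plus one
    # final squaring, instead of redoing all T squarings
    ys = [y]
    for u in proof:
        T //= 2
        r = challenge(ys)
        x = pow(x, r, m) * u % m
        y = pow(u, r, m) * y % m
        ys.append(y)
    return f(x, T, m) == y

m = 2**127 - 1                        # toy modulus
y, proof = prove(3, 2**12, m)         # 4096 squarings for the author
print(verify(3, 2**12, m, y, proof))  # True, with far less work

With 64-bit challenges the verifier does only a handful of short exponentiations per halving step, so its total work really is logarithmic in the author's T squarings -- the log(doubleCheck) effort mentioned above.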

[to be continued]
2019-08-11, 23:43   #9
R. Gerbicz ("Robert Gerbicz")

This is also related to error checking (why not, since you are doing exactly the same thing, just for a subproblem). In the last two days I thought it would be possible to beat the 2·sqrt(block) extra mulmods cost of the checking. It turned out I was wrong, but here it is, briefly.

Let y(0) = b and y(i+1) = y(i)^(2^L) mod N.

When my check computes this there is a redundancy, because you could compute y(0)·y(1)·...·y(t) in fewer than t multiplications by exploiting y(i+1) = y(i)^(2^L) mod N; it isn't that hard. The bonus is that you could even get the power bpow = b^(2^(L·(t+1))) with a modular division (which is cheap for Mersenne numbers), and a "strong" check. Unfortunately you need two computations here, which means you could pass a strong check and still have an error in the computation. So this just doesn't work in exactly this way.
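
For context, the error check being referred to here verifies blocks of L squarings by maintaining a running product of the checkpoint residues y(i) and using the identity d(t+1) = d(t)^(2^L) · b (mod N), where d(t) = y(0)·y(1)·...·y(t). A minimal Python sketch of that idea (names mine; a real implementation verifies only occasionally rather than after every block, since each verification itself costs L squarings):

Code:
def prp_with_error_check(b, p, N, L):
    # compute b^(2^p) mod N in blocks of L squarings, where
    # y(i+1) = y(i)^(2^L) and d(t) = y(0)*y(1)*...*y(t) mod N
    y = b % N
    d = y                          # running product of checkpoints
    done = 0
    while done + L <= p:
        prev_d = d
        for _ in range(L):         # one block of L squarings
            y = y * y % N
        done += L
        d = d * y % N              # d(t+1) = d(t) * y(t+1)
        check = prev_d
        for _ in range(L):         # d(t)^(2^L): the cost of the check
            check = check * check % N
        assert check * b % N == d, "error: roll back to last checkpoint"
    for _ in range(p - done):      # leftover squarings if L doesn't divide p
        y = y * y % N
    return y

print(prp_with_error_check(3, 11, 2**11 - 1, 4))  # 3^(2^11) mod 2047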
2019-08-17, 16:04   #10
R. Gerbicz ("Robert Gerbicz")

In fact these methods also work for the standard LL test, so there is an incredibly fast "proof" that convinces us in only O(sqrt(p)) multiplications, if we limit the exponents in the proof to, say, 64 bits.

Just let
H(n) = (2 + sqrt(3))^(2^n) in Z[sqrt(3)] mod m_p.
LL says that m_p is prime iff H(p-2) = 0 (where we omit the sqrt(3) part).

Of course we have to use this ugly sqrt(3) part to enable the various fast checks on the powers in H(); if we kept the conjugate part as well, we would lose the power form.

The error checks go the usual way, but in the above ring, not the friendly Z. This is somewhat interesting, I mean, having a fast "almost" true primality check. One drawback is the high storage for the ~sqrt(p) residues. (There is a trade-off here: you can lower the storage by increasing the computation time.)
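
To illustrate the ring arithmetic involved, here is a small Python sketch (names mine) of computing H(n) by repeated squaring in Z[sqrt(3)] mod m_p, storing a + b·sqrt(3) as a pair (a, b). Since the conjugate of a + b·sqrt(3) is a - b·sqrt(3) and their sum is 2a, the usual LL condition s(p-2) ≡ 0 corresponds to the a-component of H(p-2) vanishing:

Code:
def sqr(a, b, m):
    # (a + b*sqrt(3))^2 = (a^2 + 3*b^2) + (2*a*b)*sqrt(3), reduced mod m
    return (a * a + 3 * b * b) % m, 2 * a * b % m

def H(n, p):
    # H(n) = (2 + sqrt(3))^(2^n) in Z[sqrt(3)] mod m_p, as a pair (a, b)
    m = 2**p - 1
    a, b = 2, 1
    for _ in range(n):
        a, b = sqr(a, b, m)
    return a, b

# toy check: m_p is prime iff the a-component of H(p-2) is 0 mod m_p
for p in (3, 5, 7, 11, 13):
    a, _ = H(p - 2, p)
    print(p, "prime" if a == 0 else "composite")  # only 11 prints composite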
2019-08-18, 15:43   #11
kriesel ("TF79LL86GIMPS96gpu17")

Quote:
Originally Posted by R. Gerbicz
In fact these methods also work for the standard LL test, so there is an incredibly fast "proof" that convinces us in only O(sqrt(p)) multiplications, if we limit the exponents in the proof to, say, 64 bits.

Just let
H(n) = (2 + sqrt(3))^(2^n) in Z[sqrt(3)] mod m_p.
LL says that m_p is prime iff H(p-2) = 0 (where we omit the sqrt(3) part).

Of course we have to use this ugly sqrt(3) part to enable the various fast checks on the powers in H(); if we kept the conjugate part as well, we would lose the power form.

The error checks go the usual way, but in the above ring, not the friendly Z. This is somewhat interesting, I mean, having a fast "almost" true primality check. One drawback is the high storage for the ~sqrt(p) residues. (There is a trade-off here: you can lower the storage by increasing the computation time.)
All of mersenne.org's workspace is p < 10^9, ~30 bits, and likely to take a century to cover.

sqrt(p) residues, or ln(p)? (or log2(p)?)
At p = 100M, CUDALucas save files are ~12.2 MB; × sqrt(p): 12.2 MB × 10000 = 122 GB; × ln(p): 12.2 MB × ~18.4 ≈ 224 MB.
At p = 332M, CUDALucas save files are ~40.5 MB; × sqrt(p): 40.5 MB × 18221 ≈ 738 GB; × ln(p): 40.5 MB × ~20 ≈ 810 MB.
I have checkpoint-file collections for single exponents up to ~10 GB in size.
Local storage is cheap (3 TB US$50, 6 TB $120).

Does this verification only confirm the LL run was completed, or does it check its correctness?
