2019-08-02, 09:20  #1 
"Mihai Preda"
Apr 2015
3·5·7·11 Posts 
VDF (Verifiable Delay Function) and PRP
On Hacker News today I saw this story talking about VDFs, "Verifiable Delay Functions", which in one variant is: f(x, N, M) = x^(2^N) mod M
https://news.ycombinator.com/item?id=20587753
https://aws.amazon.com/blogs/startup...geblockchain/
https://blog.trailofbits.com/2018/10...unctionsvdfs/
Apparently they are planning to develop an ASIC to compute that. It would be nice if that ASIC could be repurposed for wider modular squarings. Another interesting aspect is the claim that the VDF result is verifiable in time O(log(N)), which might give an idea for a "proof of work" or verification of a PRP result. I don't understand yet how the VDF is verifiable (in log time) though; maybe somebody can explain. 
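For what it's worth, the sequential core of this VDF variant is just repeated squaring; a minimal sketch (function name mine):

```python
def vdf(x, N, M):
    """f(x, N, M) = x^(2^N) mod M, computed as N sequential squarings.
    Each squaring depends on the previous result, which is what makes
    the evaluation inherently sequential (hard to parallelize)."""
    x %= M
    for _ in range(N):
        x = x * x % M
    return x
```

Note that pow(x, 2**N, M) yields the same value, since the exponent is a power of two; the point of the VDF is that no substantially faster route than the N squarings is known without extra knowledge (e.g. the factorization of M).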
2019-08-02, 21:05  #2 
∂^{2}ω=0
Sep 2002
República de California
2^{2}×2,857 Posts 
Verifiable correct modular exponentiation, asymptotically fast results verification, and immunity to parallel speedups ... that sounds a lot like the problem described in the 20-year-old MIT LCS35 Time Capsule Crypto-Puzzle solved thread.

2019-08-05, 12:44  #3 
"Mihai Preda"
Apr 2015
1155_{10} Posts 
PRP verification
A list of VDF articles: https://vdfresearch.org/
I found of particular interest:
Pietrzak: https://eprint.iacr.org/2018/627.pdf
Wesolowski: https://eprint.iacr.org/2018/623.pdf
which present two different ways of constructing VDF proofs, with different trade-offs. Pietrzak's proof is fast to produce but log(exp) residues in size, while Wesolowski's proof is a single residue in size, but costlier to produce. For example, for a 90M exponent, Pietrzak's proof would be about 256 MB in size vs. Wesolowski's 10 MB. OTOH, if I understand correctly, Pietrzak's proof can be produced with negligible computation cost (overhead relative to the PRP itself).

Anyway, it seems this allows us to get rid of the PRP double-check completely, at the cost of a large transfer (256 MB per exponent) to a verifier. Such a possibility seems a huge opportunity to me. (Of course I would have liked the proof to be smaller than that, but even at this size it seems like a huge gain vs. a double-check.) In other words, the double-check is replaced by a verification that is log(double-check) in effort. Maybe somebody more math-inclined can confirm how this applies to our particular context.

Last fiddled with by preda on 2019-08-05 at 12:46 
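As a rough back-of-the-envelope check of these sizes (my arithmetic, assuming one residue is p bits and one stored residue per halving step):

```python
import math

p = 90_000_000                       # the exponent from the example
residue_mb = p / 8 / 1e6             # one residue ~ p bits, about 11.25 MB
halvings = math.ceil(math.log2(p))   # ~27 halving steps
pietrzak_mb = residue_mb * halvings  # ~300 MB, same order as the quoted 256 MB
wesolowski_mb = residue_mb           # a single residue, ~11 MB
```

The exact Pietrzak figure depends on where the halving recursion stops; the quoted 256 MB corresponds to storing roughly 23 residues rather than the full log2(p) ≈ 27.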
2019-08-07, 16:42  #4  
Sep 2003
A13_{16} Posts 
Quote:
As a general rule, the AWS cloud only charges for outward data transfers. If you upload data from the Internet into S3 storage in the cloud, it costs absolutely nothing. I imagine that Google and Microsoft (Azure) have similar policies. This makes sense: if you upload something, you will most likely be storing it for a while, so they make the upload bandwidth free and hope to make money over time with data storage fees. Downloading data out of the cloud to the Internet, on the other hand, is not free.

So if a future version of Primenet were hosted in the cloud, your large-transfer verification would be entirely practical, and costless. The only question is what to do with the 10 MB or 256 MB file after it has been processed and checked. If it's discarded, there's no problem. If you want to archive it for the long term just in case, the cheapest storage would be S3 Glacier Deep Archive, at $1 per TB per month. Normally you would only want to re-access such a file on very rare occasions.

Last fiddled with by GP2 on 2019-08-07 at 16:43 

2019-08-07, 18:41  #5  
Einyen
Dec 2003
Denmark
11×263 Posts 
Discussing the feasibility of a future Primenet in the cloud seems like jumping 10 steps ahead.
The most important issue is: would this even work for Mersenne PRPs? I do not claim to understand all the math and the proofs, but Pietrzak requires that N = p*q, where (p-1)/2 and (q-1)/2 are also primes, which is not at all the case for a general composite Mersenne number 2^p - 1. They generally have more than 2 factors, and those are not safe primes. Quote:
Last fiddled with by ATH on 2019-08-07 at 18:43 

2019-08-07, 20:04  #6  
"Robert Gerbicz"
Oct 2005
Hungary
2^{2}·3·5·23 Posts 
Quote:
Let res = 2^mp mod mp. If q divides mp, then res mod q = 2^(mp mod (q-1)) mod q (and computing this is easy), and it is pretty hard to fake res without knowledge of q. 
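A toy version of this check in Python (p = 11 with its factors 23 and 89 are my example values; in practice res comes from the full PRP run rather than being recomputed):

```python
def residue_spot_check(p, q, res):
    """Given a claimed Fermat residue res = 2^mp mod mp for mp = 2^p - 1,
    and a known prime factor q of mp, check consistency via Fermat's
    little theorem: res mod q must equal 2^(mp mod (q-1)) mod q."""
    mp = 2**p - 1
    assert mp % q == 0, "q must divide mp"
    return res % q == pow(2, mp % (q - 1), q)

# Example: mp = 2^11 - 1 = 2047 = 23 * 89
mp = 2**11 - 1
res = pow(2, mp, mp)   # the expensive part: the full PRP computation
assert residue_spot_check(11, 23, res)
assert residue_spot_check(11, 89, res)
assert not residue_spot_check(11, 23, res + 1)   # a faked residue fails
```

The check works because q | mp implies res mod q = 2^mp mod q, and the exponent can be reduced mod (q-1) since q is prime and does not divide 2.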

2019-08-07, 21:36  #7  
Einyen
Dec 2003
Denmark
11×263 Posts 
Quote:
I do not think this is what Preda had in mind; he wanted to eliminate PRP double-checks completely. Last fiddled with by ATH on 2019-08-07 at 21:38 

2019-08-09, 23:27  #8 
"Mihai Preda"
Apr 2015
3×5×7×11 Posts 
PRP verification with Pietrzak's protocol
I'll try to describe briefly Pietrzak's verification applied to PRP (as I understand it).
For m = 2^p - 1, PRP(p) = 3^(m-1) mod m, or equivalently PRP(p)*9 = 3^(2^p) mod m. Let's fix the modulus m, and define f(x, T) := x^(2^T) mod m. For x=3, we compute y = f(x, p). How do we prove to a verifier that the result is correct?

We call the two actors involved "the author", who does the hard work of computing the PRP and producing the proof, and "the verifier", who receives the proof from the author and checks it. The goal is to convince the verifier that the result of the PRP is genuine and correct even when the verifier does not trust the author.

If the author sent the verifier only the PRP result y, the verifier could redo the whole PRP computation *again* and thus validate the result (i.e. the double-check we do currently), but this would be very costly to the verifier, and wasteful because of the double computation.

To make the verification easier, the author also sends the "middle" point u = f(x, p/2), which the author computed anyway on the way to y. (Let's assume for a moment that p is even (although it isn't), so we can halve it without additional trouble.) The verifier can now verify f(x, p/2) == u and f(u, p/2) == y, which together are equivalent to verifying f(x, p) == y; but still no reduction in work for the verifier.

Because the two verification "branches" above share the same exponent p/2, they can be combined and verified together. Choosing a random factor r1 (let's say r1 is a 64-bit number), the verifier computes the new seed x1 = x^r1 * u, and verifies that f(x1, p/2) == y1, where y1 = u^r1 * y. So we transformed the verification f(x, p) == y into f(x1, p/2) == y1, halving the work. This "halving" is possible because the author sent the verifier the value u (the middle value between x and y), so u is part of the proof.

But if we have a way to halve the work of the verifier, why stop after only one halving step? Let's look at the next halving: the author makes available u1, the middle point between x1 and y1, u1 = f(x1, p/4). 
r2 is another random 64-bit number chosen by the verifier; the verifier computes x2 = x1^r2 * u1 and y2 = u1^r2 * y1, and verifies f(x2, p/4) == y2 with only a quarter of the work of the author. And we can keep halving, making the work of the verifier log(work of the author): at some point, after k halvings, when p/(2^k) is small enough, the verifier checks directly f(x_k, p/(2^k)) == y_k, and this is the outcome of the verification.

One may observe that we ask the author to publish as part of the proof the values u, u1, u2, etc., but those (excluding u) depend on the random factors r1, r2, which in the setup above are chosen by the verifier. We solve this by having the author produce the random factors r1, r2 not by choosing them (because we don't trust the author), but by deriving them from a hash of the results, like this: r1 = hash(y), r2 = hash(y, y1), etc. (Apparently this trick is called the Fiat-Shamir heuristic.)

[to be continued] 
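The steps above can be sketched end-to-end in a toy implementation (my code, assuming T is a power of two, and deriving each r from the current (y, u) pair via SHA-256 as the Fiat-Shamir hash, a slight variation on the r1 = hash(y), r2 = hash(y, y1) scheme described):

```python
import hashlib

def f(x, T, m):
    """x^(2^T) mod m via T sequential squarings."""
    for _ in range(T):
        x = x * x % m
    return x

def fiat_shamir(a, b):
    """Derive a 64-bit 'random' factor from the transcript."""
    h = hashlib.sha256(f"{a},{b}".encode()).digest()
    return int.from_bytes(h[:8], "big")

def prove(x, T, m):
    """Author: compute y = f(x, T) plus the log2(T) middle points."""
    y = f(x, T, m)
    proof, cx, cy = [], x, y
    while T > 1:
        T //= 2
        u = f(cx, T, m)              # middle point of the current claim
        proof.append(u)
        r = fiat_shamir(cy, u)
        cx = pow(cx, r, m) * u % m   # new seed   x' = x^r * u
        cy = pow(u, r, m) * cy % m   # new target y' = u^r * y
    return y, proof

def verify(x, T, m, y, proof):
    """Verifier: log2(T) small exponentiations plus one final squaring."""
    for u in proof:
        T //= 2
        r = fiat_shamir(y, u)
        x = pow(x, r, m) * u % m
        y = pow(u, r, m) * y % m
    return f(x, T, m) == y
```

For example, with m = 2^89 - 1, x = 3 and T = 64, prove() returns y together with 6 middle points, and verify() accepts the pair while rejecting a tampered y. The author's middle points here are recomputed for clarity; a real implementation would save them as checkpoints during the main run.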
2019-08-11, 23:43  #9 
"Robert Gerbicz"
Oct 2005
Hungary
564_{16} Posts 
It is also related to error checking (why not, because you are doing exactly the same thing, just for a subproblem).

In the last 2 days I thought it would be possible to beat the 2*sqrt(block) extra mulmods cost in the checking. It turned out I was wrong, but here it is, shortly: let y(0) = b and y(i+1) = y(i)^(2^L) mod N. When my check computes this, there is a redundancy, because you could compute y(0)*y(1)*...*y(t) in fewer than t multiplications by exploiting y(i+1) = y(i)^(2^L) mod N; it isn't that hard. The bonus is that you could even get the power as well, bpow = b^(2^L*(t+1)), with a modular division (cheap for Mersenne numbers), giving a "strong" check. Unfortunately you need two computations here, which means you could pass a strong check and still have an error in the computation. So it just doesn't work in exactly this way. 
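For reference, a minimal sketch of the product-based consistency check itself (the construction the post is trying to improve on; my code and names):

```python
def f(x, T, m):
    """x^(2^T) mod m via T sequential squarings."""
    for _ in range(T):
        x = x * x % m
    return x

def checked_powers(b, L, t, m):
    """Compute y(i+1) = y(i)^(2^L) mod m for i = 0..t-1, with one
    consistency check at the end: since each y(i)^(2^L) = y(i+1),
    (y(0)*y(1)*...*y(t-1))^(2^L) must equal y(1)*y(2)*...*y(t)."""
    y, A, B = b, 1, 1
    for _ in range(t):
        A = A * y % m    # running product y(0)..y(t-1)
        y = f(y, L, m)   # one block of L squarings
        B = B * y % m    # running product y(1)..y(t)
    assert f(A, L, m) == B, "error detected in the squaring chain"
    return y             # y(t) = b^(2^(L*t)) mod m
```

A single undetected hardware error in any block almost certainly breaks the product identity, which is why one such check per run catches errors at the cost of roughly L extra squarings plus 2t extra multiplications.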
2019-08-17, 16:04  #10 
"Robert Gerbicz"
Oct 2005
Hungary
1380_{10} Posts 
In fact these methods also work for the standard LL test, so there is an incredibly fast "proof" to convince ourselves in only O(sqrt(p)) multiplications, if we limit the exponents in the proof to, say, 64 bits.
Just let H(n) = (2+sqrt(3))^(2^n) in Z[sqrt(3)] mod mp. LL says that mp is prime iff H(p-2) = 0 (where here we omit the sqrt(3) stuff). Of course we have to use this ugly sqrt(3) part to enable the various fast checks for the powers in H(); if we kept the conjugate part, we would lose the power structure. The error checks go in the usual way, but in the above ring, not the friendly Z.

This is somewhat interesting, I mean having a fast "almost" true primality check. One drawback is the high storage cost of the ~sqrt(p) residues. (There is a trade-off here: you can lower the storage by increasing the computation time.) 
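A toy illustration of the ring arithmetic (my code; it uses the equivalent condition that the LL trace s(p-2) ≡ 0 amounts to (2+sqrt(3))^(2^(p-1)) ≡ -1 in Z[sqrt(3)] mod mp, since (2+sqrt(3))(2-sqrt(3)) = 1 makes the conjugate the inverse):

```python
def sqr(ab, m):
    """Square a + b*sqrt(3) in Z[sqrt(3)] mod m:
    (a + b*sqrt(3))^2 = (a^2 + 3*b^2) + (2*a*b)*sqrt(3)."""
    a, b = ab
    return ((a * a + 3 * b * b) % m, 2 * a * b % m)

def ll_via_ring(p):
    """LL test phrased as a power of H = 2 + sqrt(3):
    mp = 2^p - 1 is prime iff H^(2^(p-1)) == -1 mod mp."""
    m = 2**p - 1
    h = (2, 1)               # 2 + sqrt(3)
    for _ in range(p - 1):
        h = sqr(h, m)
    return h == (m - 1, 0)   # i.e. H^(2^(p-1)) == -1
```

This matches the usual LL sequence because s(n) = H(n) + H(n)^(-1), so s(p-2) ≡ 0 iff H(p-2)^2 = H(p-1) ≡ -1.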
2019-08-18, 15:43  #11  
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
4,243 Posts 
Quote:
sqrt(p) residues, or ln(p)? (Or log_2(p)?)

At p=100M, CUDALucas save files are ~12.2 MB; × sqrt(p) = 12.2 MB × 10000 = 122 GB; 12.2 × ln(p) = 12.2 × ~18.4 ≈ 224 MB. At p=332M, CUDALucas save files are 40.5 MB; × sqrt(p) = 40.5 × 18221 = 738 GB; 40.5 × ln(p) = 40.5 × ~20 = 810 MB. I have checkpoint file collections for single exponents up to ~10 GB in size. Local storage is cheap (3 TB US$50, 6 TB $120).

Does this verification only confirm the LL run was completed, or does it check its correctness?

Last fiddled with by kriesel on 2019-08-18 at 16:03 
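The storage arithmetic above can be reproduced directly (my code; save-file sizes taken from the post):

```python
import math

def storage_gb(save_file_mb, count):
    """Total checkpoint storage in GB for `count` residues of the given size."""
    return save_file_mb * count / 1000

p = 100_000_000
save_mb = 12.2                                       # CUDALucas save file at p=100M
sqrt_total = storage_gb(save_mb, math.isqrt(p))      # ~sqrt(p) residues -> ~122 GB
log_total = storage_gb(save_mb, math.log(p))         # ~ln(p) residues  -> ~0.22 GB
```

The gap between the two answers (hundreds of GB vs. hundreds of MB) is why it matters whether the proof needs sqrt(p) or only log(p) stored residues.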
