[QUOTE=preda;496626][/QUOTE]
Having to re-do the P-1 first stage for double-checking is a hit. (Note that the PRP-1 could be double-checked without redoing the P-1 by saving the base bits and starting the double-check from there. Even if the base is "wrong" or "random" (e.g. there was an undetected error in the P-1 first stage), the PRP(Base) is still valid and can be double-checked.)
[QUOTE=preda;496626]Your analysis disregards the fact that the PRP-1 *stops* as soon as it detects a factor. So in the lucky situation that a factor is found, the test takes [much] less time than one LL. This is guaranteed through the way the Ks are selected for testing (an overview of the "K selection algorithm" is here: [url]https://mersenneforum.org/showpost.php?p=495794&postcount=43[/url] and source is here: [url]https://github.com/preda/gpuowl/blob/master/kselect/kselect.cpp[/url] ) -- that algorithm explicitly favors early Ks that produce big cutoffs when a factor is found.
(PRP-1 does multiple GCDs as it moves along, not a single one at the end.) I'll try to quantify the cost next, but it's more involved.[/QUOTE]
OK, in my analysis, even accounting for the early stop of PRP-1 when a factor is found, based on the probabilities coming from the B1/B2 bounds, PRP-1 is not better than P-1 followed by PRP, with double-checks. This is mainly because of the 1.6% supplementary cost of computing the base at the beginning of the double-check. (If that cost is ignored, PRP-1 becomes a bit faster than P-1 + PRP.) OTOH the difference between the two is tiny, less than 1% of a single round of PRP.

In my analysis I took the cost of one MUL (general multiplication) to be 2x the cost of a squaring (because on the GPU, in my implementation, the MUL is that slow), and I approximated the cost of one GCD at 2000 squarings.

I still think doing some PRP-1 may be worthwhile, at least to explore the empirical performance (i.e. what factors it finds, and how frequently). I also don't know the exact probabilities of finding factors, so I use lower bounds coming from P-1(B1, B2).

I don't have experience with implementing the second stage of P-1. Could somebody enlighten me WRT the cost of the second stage, as a function of B2?
[QUOTE=preda;496670]I don't have experience with implementing the second stage of P-1. Could somebody enlighten me WRT the cost of the second stage, as a function of B2?[/QUOTE]
Excluding some setup costs, it is one subtraction and one multiplication per pair of primes. Sometimes a given prime cannot be paired; in that case, one subtraction/multiplication for a single prime. (No idea what % of primes can be paired.) This is for all primes B1 < p <= B2.
[QUOTE=axn;496675]Excluding some setup costs, it is one subtraction and one multiplication per pair of primes. Sometimes a given prime cannot be paired; in that case, one subtraction/multiplication for a single prime. (No idea what % of primes can be paired.) This is for all primes B1 < p <= B2.[/QUOTE]
OK, thanks! And how expensive is the initial setup? Is the setup so cheap that it can be disregarded? Does the setup cost grow with B2?
[QUOTE=preda;496676]OK, thanks! And how expensive is the initial setup? Is the setup so cheap that it can be disregarded? Does the setup cost grow with B2?[/QUOTE]
Yes, it is pretty cheap, and no, it is a constant cost. The setup involves choosing a constant D and precomputing powers x^b for all b relatively prime to D. Then it is a baby-step giant-step type iteration through the primes from B1 to B2, where the giant step moves up by D and the baby steps accumulate the precomputed small powers. Crandall and Pomerance's book provides more details.

[QUOTE=axn;496675]Excluding some setup costs, it is one subtraction and one multiplication per pair of primes. Sometimes a given prime cannot be paired; in that case, one subtraction/multiplication for a single prime. (No idea what % of primes can be paired.) This is for all primes B1 < p <= B2.[/QUOTE]
In my ECM implementation (P-1 will be similar), about 20% are typically paired. (It's probably possible to do better with more effort.)
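For illustration, here is a minimal sketch of the scheme described above (my own code, not gpuowl's or anyone's actual implementation; the function names and the contrived demo numbers are mine). It uses the symmetric values w_k = y^k + y^(-k) mod N, so that a single product term w_{vD} - w_u covers both primes vD - u and vD + u when both happen to be prime -- that is the "pairing". Requires Python 3.8+ for negative exponents in three-argument pow.

```python
from math import gcd

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def p_minus_1_stage2(y, N, B1, B2, D=30):
    """P-1 stage 2 on N, given the stage-1 result y = b^E mod N.

    If p-1 = s*q with s B1-smooth and B1 < q <= B2 prime, then y^q = 1
    (mod p); writing q = v*D +/- u gives w_{v*D} = w_u (mod p), so p
    divides the accumulated product and shows up in the final GCD."""
    yinv = pow(y, -1, N)
    w = lambda k: (pow(y, k, N) + pow(yinv, k, N)) % N  # used for setup only

    # Constant-cost setup: baby steps w_u for u < D/2 coprime to D.
    baby = {u: w(u) for u in range(1, D // 2) if gcd(u, D) == 1}
    wD = w(D)

    v = B1 // D
    wprev, wcur = w((v - 1) * D), w(v * D)
    acc, paired, single = 1, 0, 0
    while v * D - D // 2 <= B2:
        for u, wu in baby.items():
            hits = [q for q in (v * D - u, v * D + u)
                    if B1 < q <= B2 and is_prime(q)]
            if hits:                      # one subtraction + one MUL
                acc = acc * (wcur - wu) % N
                paired += len(hits) == 2  # one term, two primes covered
                single += len(hits) == 1
        # Giant step via the Lucas-V recurrence w_{k+D} = w_k*w_D - w_{k-D}:
        # one MUL per giant step.
        wprev, wcur = wcur, (wcur * wD - wprev) % N
        v += 1
    return gcd(acc, N), paired, single

# Tiny demo (contrived so stage 2 succeeds): N = 607 * 2027, and
# 607 - 1 = 2 * 3 * 101, so with B1 = 10 the smooth part is covered in
# stage 1 and the remaining prime q = 101 lies in (B1, B2].
N = 607 * 2027
y = pow(3, 2520, N)   # stage 1 with E = 2^3 * 3^2 * 5 * 7 (prime powers <= 10)
g, paired, single = p_minus_1_stage2(y, N, B1=10, B2=200)
print(g)  # 607
```

Note how the setup size depends only on D, not on B2, matching the "constant cost" answer above; the per-prime work is the one subtraction plus one MUL per term, as axn described.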
OK. So comparing the number of multiplications:
in PRP-1 one multiplication on average covers about 2.5 "useful" primes and 2 more "large" primes. So from this aspect PRP-1 looks more efficient than second-stage P-1. OTOH, PRP-1 also does interleaved PRP steps that are wasted if a factor is found. It's true that if a factor is found, chances are it's found early in the PRP-1, so there's not much wastage.

Other small advantages of PRP-1: low memory usage, and it's much simpler to implement (compared to second-stage P-1). Also, the iteration values that are accumulated towards the GCD are error-checked with the GEC (but not the multiplication itself), thus a bit better error resistance.

[QUOTE=bsquared;496683]Yes, it is pretty cheap, and no, it is a constant cost. The setup involves choosing a constant D and precomputing powers x^b for all b relatively prime to D. Then it is a baby-step giant-step type iteration through the primes from B1 to B2, where the giant step moves up by D and the baby steps accumulate the precomputed small powers. Crandall and Pomerance's book provides more details. In my ECM implementation (P-1 will be similar), about 20% are typically paired. (It's probably possible to do better with more effort.)[/QUOTE]
factors of 2^k - 1
I realize I have only a small understanding of the prime factors of 2^k - 1:
1. (a sufficient condition): if z(p) | k, then p | (2^k - 1)
2. in general, 2^k - 1 has "many factors, thus mostly small factors".

But condition 1 above is not necessary, meaning there are p such that p | (2^k - 1), yet z(p) does not divide k.

For example, I wonder what is the factor coverage of 2^k - 1, for k = 0..N. Let's say, given N, what is the smallest prime p that is not a factor of any 2^k - 1, for k in [0, N]? (Maybe a probabilistic argument would help here.) Another way to put it: what's the smallest p that does not divide product(2^k - 1, for k = 0..N)? And how does such a "smallest p" grow with N?

PS: sure, I mean "odd prime" p (excluding 2, which is never a factor of 2^k - 1).
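The "smallest uncovered prime" question can be explored by brute force (my own snippet, not from the thread; it relies on the standard fact that for odd p, p | 2^k - 1 exactly when z(p) | k, so the sought prime is the smallest p with z(p) > N):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def z(p):
    """Multiplicative order of 2 modulo the odd prime p."""
    k, x = 1, 2 % p
    while x != 1:
        x = 2 * x % p
        k += 1
    return k

def smallest_uncovered_prime(N):
    """Smallest odd prime p that divides no 2^k - 1 for k in [0, N].

    Since p | 2^k - 1 exactly when z(p) | k, this is the smallest odd
    prime with z(p) > N.  Brute force; fine for small N."""
    p = 3
    while True:
        if is_prime(p) and z(p) > N:
            return p
        p += 2

for N in (10, 100):
    print(N, smallest_uncovered_prime(N))  # prints "10 13" then "100 107"
```

Since z(p) <= p - 1, the answer is always greater than N + 1, so this "smallest p" grows at least linearly in N.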
[QUOTE=preda;496769]I realize I have only a small understanding of the prime factors of 2^k-1[/QUOTE]
Is k composite, or prime?
[QUOTE=science_man_88;496770]is k composite, or prime?[/QUOTE]
Either. Take every value in the interval [0, N]. |
[QUOTE=preda;496769]I realize I have only a small understanding of the prime factors of 2^k - 1:
1. (a sufficient condition): if z(p) | k, then p | (2^k - 1) 2. in general, 2^k - 1 has "many factors, thus mostly small factors". But the condition 1 above is not necessary, meaning there are p such that p | (2^k - 1), yet z(p) does not divide k.[/QUOTE] What is z(p)? Also, can you give an example of p | (2^k - 1) where z(p) does not divide k?

EDIT: OK, from up-thread: [quote]For any prime "p", let's call z(p) the multiplicative order of 2 modulo p[/quote]
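A quick empirical check (my own snippet, not from the thread) suggests the condition is in fact also necessary: for small odd primes, p divides 2^k - 1 exactly when z(p) divides k:

```python
def z(p):
    """Multiplicative order of 2 modulo the odd prime p."""
    k, x = 1, 2 % p
    while x != 1:
        x = 2 * x % p
        k += 1
    return k

# For small odd primes, p | 2^k - 1 holds exactly when z(p) | k.
for p in (3, 5, 7, 11, 13, 17, 19, 23):
    for k in range(1, 200):
        assert (pow(2, k, p) == 1) == (k % z(p) == 0)

print({p: z(p) for p in (3, 5, 7, 11, 13)})  # {3: 2, 5: 4, 7: 3, 11: 10, 13: 12}
```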
For p = 1583 423452 213582 178911 805893 942695 192421,
I imagine z(p) does not divide 1620, yet p divides 2^1620 - 1.

[QUOTE=preda;495478]For fun here's the factorization of 2^1620 - 1: (done with the nice applet [url]https://www.alpertron.com.ar/ECM.HTM[/url] )

Tiny factors:
3^5 × 5^2 × 7 × 11 × 13 × 19 × 31 × 37 × 41 × 61 × 73 × 109 × 151 × 163 × 181 × 271 × 331 × 541 × 631 × 811

Average factors, < 3M (e.g. could be covered in first-stage P-1):
1321 × 1621 × 2593 × 6481 × 9721 × 15121 × 23311 × 30241 × 49681 × 54001 × 71119 × 87211 × 135433 × 246241 × 262657 × 279073 × 348031 × 537841

Large factors, [3M, 4B] (could be covered in second-stage P-1):
3 618757 × 6 876901 × 18 837001 × 29 247661 × 74 967931 × 97 685839 × 106 979941 × 168 410989 × 272 010961 × 1511 474581 × 2437 880491 × 2458 695061 × 3934 029061

Larger factors, > 4B:
[CODE]
4977 454861 448217 524891
9 893662 806061
10 360573 664851
49 971617 830801
165 041853 060421
385 838642 647891
1969 543281 137041
11096 527935 003481
166242 935471 754241
[/CODE]
Huge factors:
1583 423452 213582 178911 805893 942695 192421 (40 digits)
4343 952637 722706 853771 280086 533392 805261 (40 digits)
17 645665 556213 400107 370602 081155 737281 406841 (44 digits)

PS: and when you think that the expected number of factors for a number of that magnitude, given by log(log(x)), is 7.[/QUOTE]
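The claim is easy to check directly (my own snippet; assuming the quoted factorization is correct). Note that for odd p, p | 2^k - 1 is equivalent to z(p) | k, so if p really divides 2^1620 - 1 then z(p) necessarily divides 1620 as well:

```python
# The 40-digit "huge factor" quoted above, digit groups joined:
p = 1583423452213582178911805893942695192421

# p | 2^1620 - 1 exactly when 2^1620 ≡ 1 (mod p):
print(pow(2, 1620, p) == 1)

# If so, z(p) is the smallest divisor d of 1620 with 2^d ≡ 1 (mod p),
# and by construction z(p) | 1620:
zp = next(d for d in range(1, 1621) if 1620 % d == 0 and pow(2, d, p) == 1)
print(zp)
```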