Can we begin PRP using part of the results of P-1 stage 1?
As far as I know, in P-1 stage 1 we compute 3[SUP]2*p*E[/SUP] modulo M[SUB]p[/SUB]. However, in PRP we also compute 3[SUP]2^p[/SUP] modulo M[SUB]p[/SUB]. In both cases the base number is 3.
Could we begin PRP using part of the results of P-1 (say, 3[SUP]2^k[/SUP], where k is the highest bit level of 2*p*E), to save some work (k iterations, about 1%), since most PRP first-time tests are assigned together with a prior P-1? I'm a freshman here and I don't know whether the idea is feasible. |
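One way such sharing can work is to run the squaring chain on base 3^o, where o is the odd part of the P-1 stage 1 exponent 2pE: after enough squarings, the residue is a valid stage 1 result, and the same chain continues toward the PRP residue. A toy Python sketch of this idea follows (illustrative only; the exponent, B1 and variable names are mine, and this is not prime95's or gpuowl's actual code):

```python
from math import gcd

# Toy sketch of sharing work between P-1 stage 1 and PRP.
# Idea: run the squaring chain on base 3^o, where o is the odd part of the
# stage 1 exponent 2*p*E; after k squarings the residue 3^(o*2^k) is a valid
# stage 1 result, and the same chain continues on toward the PRP residue.
p = 23                        # toy exponent (2^23 - 1 = 47 * 178481)
Mp = (1 << p) - 1
E = 16 * 9 * 5 * 7 * 11 * 13 * 17 * 19 * 23   # toy lcm(1..B1) for B1 = 23
e = 2 * p * E                 # stage 1 exponent
o = e >> ((e & -e).bit_length() - 1)          # odd part of e

x = pow(3, o, Mp)             # one-off cost, shared by both tests
k = 25                        # any k with 2^k >= the 2-part of e works
r = x
for _ in range(k):
    r = r * r % Mp            # these squarings double as PRP iterations
f = gcd(r - 1, Mp)            # P-1 stage 1 GCD, almost free
# continuing the same chain to p squarings total would give the PRP
# residue of base x
```

With these toy values the GCD picks up the factor 47 (since 47 - 1 = 2*23 divides o*2^k), showing how the stage 1 result falls out of the shared squaring chain.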
A good idea! I won't go into the details of how one would do such an optimization.
Gpuowl already does this. It is on my list of things to look into for prime95. The one problem is the optimization takes a lot of memory. Thus, for many users there may be little benefit. |
[QUOTE=Prime95;579631]A good idea! I won't go into the details of how one would do such an optimization.
Gpuowl already does this. It is on my list of things to look into for prime95. The one problem is the optimization takes a lot of memory. Thus, for many users there may be little benefit.[/QUOTE] Thank you! By the way, how much memory will it take? I'm sparing 11GB of memory out of 16GB to run Prime95 - is that enough? Also, I'm looking forward to seeing the version 30.7 release. |
[URL="https://www.mersenneforum.org/showthread.php?t=25774"]This thread[/URL] has some related discussions, I believe.
|
And this [URL="https://www.mersenneforum.org/showthread.php?t=25799"]followup thread[/URL] as well
|
Estimating space: each power needed takes (64 bits/double) / (~18 bits/fft word) x ceil(p/8) bytes as doubles, or ceil(p/8) bytes in compressed form. For p up to ~10[SUP]9[/SUP], log2(10[SUP]9[/SUP]) ~ 30 powers are needed, so 30 x ceil(10[SUP]9[/SUP]/8) bytes = 3.75GB compressed. The representation as doubles is ~13.3GB, without misc. overhead. That's in addition to the P-1 stage 1 requirements. In low-RAM situations, or to save at a stop and resume later, it can be spilled to disk and retrieved later.
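The arithmetic in that estimate can be checked in a few lines (illustrative Python; decimal GB and ~18 useful bits per 64-bit FFT word are assumed, as in the post):

```python
from math import ceil, log2

# Back-of-envelope check of the storage estimate above.
p = 10**9                         # upper end of the current exponent range
powers = ceil(log2(p))            # ~30 stored powers of the residue
packed_bytes = ceil(p / 8)        # one residue, bit-packed
packed_gb = powers * packed_bytes / 1e9
double_gb = packed_gb * 64 / 18   # same data expanded into FFT doubles
print(packed_gb, double_gb)       # ~3.75 GB compressed, ~13.3 GB as doubles
```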
|
[QUOTE=Prime95;579631]The one problem is the optimization takes a lot of memory. Thus, for many users there may be little benefit.[/QUOTE]
Given that Ben is doing 10x more first-time primality tests than anyone else, it might be worth doing just for Ben! (He will have 192 GB on his Amazon instances). BTW, during stage 2 of P-1 factoring of [M]M104212883[/M], which was previously trial factored to 2[SUP]76[/SUP], mprime used 303 GB RAM with B1=881,000, B2=52,281,000. [CODE][Worker #2 May 31 12:25] M104212883 stage 1 complete. 2543108 transforms. Time: 1935.398 sec.
[Worker #2 May 31 12:25] Starting stage 1 GCD - please be patient.
[Worker #2 May 31 12:26] Stage 1 GCD complete. Time: 45.404 sec.
[Worker #2 May 31 12:26] Available memory is 376728MB.
[Worker #2 May 31 12:26] D: 2730, relative primes: 7088, stage 2 primes: 3059717, pair%=95.64
[Worker #2 May 31 12:26] [B]Using 310498MB of memory[/B].[/CODE]mprime is saying it saves two primality tests if a factor is found, even though the true figure is only very slightly over one. [CODE][Worker #2 May 31 11:53] Assuming no factors below 2^76 and [B]2 primality tests[/B] saved if a factor is found.[/CODE] |
[QUOTE=drkirkby;579754]mprime is saying it saves two primality tests if a factor is found, despite it is only very slightly over one.
[CODE][Worker #2 May 31 11:53] [COLOR=Red][B]Assuming[/B][/COLOR] no factors below 2^76 and [B]2 primality tests[/B] saved if a factor is found.[/CODE][/QUOTE]Key word there is highlighted in red, a holdover from the days of all LL, no PRP. Since most first testing is now PRP, perhaps a future release will adjust the assumption from 2 to ~1 for first tests, or at least PRP first tests, which would adjust the bounds selection somewhat, for greater overall efficiency of searching for new Mersenne primes. |
Wow!
By the way, shall we use slightly smaller TF bounds, such as 2^75, for exponents of ~115M? I think it's not a must given so much GPU computing power. |
[QUOTE=Prime95;579631]It is on my list of things to look into for prime95. The one problem is the optimization takes a lot of memory. Thus, for many users there may be little benefit.[/QUOTE]
While the optimal memory is 2^13 temps for the current wavefront, we can make do with much smaller amounts and still gain a lot (compared to distinct P-1 + PRP). Illustrative numbers: the current wavefront is around 110M, which uses a 6M FFT. Given 1GB of memory, we can get 1024/48 = 21 temps. Let's assume we're targeting B1=1.2M, which is about 1.73M bits of straight P-1. With 16 temps (largest power of 2 < 21), we can do the P-1 stage 1 with an additional ~290k multiplies. However, we aren't limited to a power-of-two number of temps (though that is the easiest to conceptualize). If we utilize all 21 temps (handling all 5-bit patterns and a few 6-bit patterns), this reduces the effort to ~275k multiplies. Compare this with the optimal 2^13 temps, which gets this done in ~132k multiplies. But also compare it with straight P-1, which gets this done in 1.73M multiplies!!

Also, there is a downside to using a large number of temps: all the temps are part of your state! So if you need to write a checkpoint, you will need to write all of them to disk. In the optimal case, that is ~110GB of IO per checkpoint. Obviously, this is not good. You could reduce it by accumulating the results and just writing that out - but that would mean 2*temps muls before each checkpoint. Here also, having fewer temps is helpful. In short, less memory is still a major gain, and might be a blessing in disguise. |
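The multiply counts above can be roughly reproduced with a sliding-window cost model (a crude approximation I am assuming for illustration; real implementations schedule the multiplies differently, so the numbers only land in the right ballpark):

```python
# Rough sliding-window cost model for P-1 stage 1 exponentiation.
# With t precomputed odd powers, windows cover w = log2(t)+1 bits and a
# sliding window advances about w+1 bits per multiply on average.
def approx_multiplies(exp_bits, temps):
    w = temps.bit_length()            # e.g. 16 temps -> 5-bit odd windows
    return temps + exp_bits // (w + 1)  # precompute cost + window multiplies

exp_bits = 1_730_000                  # ~bits in the B1=1.2M stage 1 exponent
print(approx_multiplies(exp_bits, 16))    # ~288k, matching the post's ~290k
print(approx_multiplies(exp_bits, 8192))  # ~124k, near the post's ~132k
print(exp_bits)                           # straight P-1: ~1.73M multiplies
```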
To piggyback on this idea, could we also go the other way? Could we use a saved value of B1 and also a small power of 3 times one of the last few iterations of the PRP test to do a quick 2nd B1 test? Meaning, the following:
Given Mp and iteration m near the end of the PRP test (say on the last day), so m is somewhat close to but below p-1. On iteration m, we have found 3^2^m mod Mp. We then do a quick P-1 test on 3^[(2^m+k)*B1] mod Mp for 1<=k<=128. Is that feasible? Each 2^m+k should have about 18.4 distinct prime factors. Perhaps 18.4 prime factors for each value of k is enough to make it matter? Or would the GCD stage take too long? |
[QUOTE=JuanTutors;581068]Perhaps 18.4 prime factors for each value of k is enough to make it matter? Or would the GCD stage take too long?[/QUOTE]
Only 18.4? You need tens of thousands, if not millions. |
[QUOTE=Zhangrc;581096]Only 18.4? You need tens of thousands, if not millions.[/QUOTE]
I was making a suggestion for stage 2, where just one additional prime at a time is being tested (as far as I understand it). In fact, the stage 1 exponent E for B1=10^6 includes approximately 78,000 distinct primes, so 18.4 prime factors would not be bad. I think the problem might be more that 2^m+k is usually quite large. |
I think there may be a worthwhile modification merging Zhangrc's suggestion and mine. After posting, I realized the [BADLY FLAWED] suggestion above is perhaps fixable.
First, a random number with 10^8 digits has on average [TEX]\ln(10^8 \ln 10) \approx 19.25[/TEX] distinct prime factors, which is why I was suggesting waiting until near the end to perform a third P-1 test. However, even a smaller integer with 10^6 digits has on average [TEX]\ln(10^6 \ln 10) \approx 14.65[/TEX] distinct prime factors. Also, please correct me if I am wrong, but is it not the case that Mp is as likely to be a probable prime base 3 as it is to be a probable prime base 3^(2pE)? If so, my suggestion is that we perform the PRP test base 3^(2pE). Then, when the test is 1% of the way done on iteration m ≈ p/100 ≈ 10^6, less than a week into the PRP test in many cases, we pause the PRP test and do a stage 2 (stage 3?) test with 3^(2pE*(2^m+k)) for adequately chosen values of k. This would test approximately 14.65 distinct prime factors on each iteration. |
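The averages quoted in that post follow the Hardy-Ramanujan heuristic that a random integer n has about ln(ln(n)) distinct prime factors; a quick check of the two figures (illustrative Python, function name is mine):

```python
from math import log

# Hardy-Ramanujan heuristic: a random integer n has ~ln(ln(n)) distinct
# prime factors on average; for a d-digit number, ln(n) = d*ln(10).
def avg_distinct_factors(digits):
    return log(digits * log(10))

print(avg_distinct_factors(10**8))   # ~19.25, as quoted
print(avg_distinct_factors(10**6))   # ~14.65, as quoted
```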
[QUOTE=LaurV;581254]Then you take the GCD step, and... what?[/QUOTE]
(from another thread) Please see my post above, where I modified my suggestion. As you see above, I suggested performing the PRP test with 3^(2pE) instead of with 3, and then stopping at the m = millionth or so iteration and taking GCDs of 3^(2pE(2^m+k)) for various well-chosen values of k. There are alternate variations of this, but to answer your question - take the GCD and then... what? If the GCD is not 1, we know that a factor exists, and therefore Mp is not prime. Yes, it's true that we will not know the factor of Mp, because we will not know the factors of 2^m+k, but we will know that Mp is not prime, which has always been the goal of the PRP test. We could, in the future if motivated, factor 2^m+k for as long as it takes to find that factor of Mp, or not. |
[QUOTE=JuanTutors;581275](from another thread)
Please see my above post where I modified my suggestion. As you see above I suggested performing the PRP test with 3^(2pE) instead of with 3, and then stopping at the m = millionth or so iteration and taking GCDs of 3^(2pE(2^m+k)) for various well-chosen values of k. There are alternate variations of this, but to answer your question, which was, take the GCD and then ... what? If the GCD is not 1, we know that a factor exists from the test, and therefore Mp is not prime. Yes, its' true that we will not know the factor of Mp, because we will not know the factors of 2^m+k, but we will know that Mp is not prime, which has always been the goal of the PRP test. We could, in the future if motivated, factor 2^m+k for as long as it takes to find that factor of Mp, or not.[/QUOTE] Do we need to factor the 2^m+k? It's my understanding of GCD that if, for any number - in this case some power of 3 - we take the GCD of it (minus 1) and the Mersenne number, and get a result other than 1, the result will directly be a factor of the Mersenne number, per the definition of GCD. If the factor isn't the Mersenne number itself, we are good. It sounds like a reasonable feature if it works as promised. Using the PRP test as a free booster to speed up the ride is a great idea, IMO.

If we start the test with a modular residue of 3^(2*p*E), where E is computed using B1=1,000,000, and we compute the [B][U]entire[/U][/B] PRP test with this value, what we get in the end is 3^(2*p*E*2^p), which can still give us a factor, and has the added benefit of determining PRP, because if Mp is prime, then 3^(k*2^p) ≡ 3^(2k) (mod 2^p-1), so the final residue is still predictable. However, in the case of a composite, residues will no longer match for different runs, i.e. with different B1 sizes, but certification should still be possible because we are still doing the same test, just with a different starting value (others should correct me on that if I am wrong, because I'm not sure). It could also be done the other way around.
First, perform the PRP test. Then, use the residue in P-1 instead of 3. As I think about it, it is quite possible it will give no benefit either way. I think the only way it could benefit the P-1 would be when 2^m+k happens to have just the right factors above B1. What do you think? |
[QUOTE=Viliam Furik;581278]Do we need to factor the 2^m+k?[/QUOTE]
No, you are correct, we do not need to know the exact smoothness of the factor of Mp. That was my mistake. I will edit my post to acknowledge my deletion. [QUOTE=Viliam Furik;581278]It sounds like a reasonable feature if it works as promised.[/QUOTE] Agreed. [QUOTE=Viliam Furik;581278]Using the PRP test as a free booster to speed up the ride is a great idea, IMO. [/QUOTE] Hopefully it's possible. GPUto72 already implements a time saver using 3^(2pE). I was trying to find the theorem about the length of the distinct prime factors of a randomly chosen large integer, but I could not find it to estimate the probability of finding a factor. (For some reason I have in my mind the idea that for a random number, the length of its successive distinct prime factors grows on average by multiples of 2.) I don't know the exact details of the P-1 implementation in GIMPS, but if k can be taken as a multiple of E, then it can be guaranteed that 2^m+k has no common factors with E. If not, then I guess other methods of choosing k may be more appropriate, such as choosing k to be a multiple of a smaller primorial. From what I understand, if we start the PRP test with base 3^(2pE) instead of base 3, then calculating this modified stage 2 (stage 3?) means taking the mth step, which is 3^(2pE(2^m)), multiplying it by a large array of values 3^(2pEk) with k replaced by K, 2K, 3K, ..., nK for K a primorial (and if K can be replaced with E, even better), and then getting the GCD.

//EDIT: Looking more deeply at the P-1 Stage 2 algorithm, if letting K = E is possible, then storing 3^(2pE*E*2), 3^(2pE*E*4), 3^(2pE*E*6), 3^(2pE*E*8), ... allows us to then use the same bounds B1 and B2 to calculate 3^(2pE*(2^m+qE)) for all primes q such that B1<q<=B2, ensuring that each individual (2^m+qE), for q in (B1,B2], is neither divisible by a factor of E nor by q, making this a true stage 3.
[QUOTE=JuanTutors;581294]I was trying to find the theorem about the length of the distinct prime factors of a randomly chosen large integer, but I could not find it to estimate the probability of finding a factor.[/QUOTE]
I don't know about such a theorem, but I do know there is an unproven conjecture called the "abc conjecture", which says something about factors of sums. However, I don't think it's useful in this case. |
[QUOTE=JuanTutors;581294]<snip>
I was trying to find the theorem about the length of the distinct prime factors of a randomly chosen large integer, but I could not find it to estimate the probability of finding a factor. (For some reason I have in my mind the idea that for a random number, the length of its successive distinct prime factors grows on average by multiples of 2.) <snip>[/QUOTE]Not quite what you're asking, but possibly related: [url=https://mathworld.wolfram.com/Hardy-RamanujanTheorem.html]Hardy-Ramanujan Theorem[/url] [url=https://mathworld.wolfram.com/Erdos-KacTheorem.html]Erdős-Kac Theorem[/url] [url=https://mathworld.wolfram.com/Golomb-DickmanConstant.html]Golomb-Dickman Constant[/url] Mersenne numbers have a number of well known properties which mark them as special, e.g. if p > 2 is prime, and q divides 2^p - 1, then q = 2*p*k + 1 for some positive integer k. I don't have statistics for the number of factors, or the size of the smallest factor of 2^p - 1, but I imagine someone does. I have, from time to time, gone through factor tables for small exponents looking for smallest factors q for which log(q)/(p*log(2)) is largest. The "nightmare" case is that 2^p - 1 is a P2 with log(q) > p*log(2)/3. It is a humbling thought that 2^1277 - 1 is known to be composite, a C385, but has yet to be factored. It is known to have no factors less than 2^68. AFAIK it could be a P2 with smallest factor > 2^425. |
Not quite the theorems I was after, but my last statement makes me realize that if we can delay stage 2 until PRP has run through its first 10^6 iterations or so, and again assuming that we can perform the PRP test with 3[SUP]2pE[/SUP] instead of with 3, then without too much difficulty - just double the time for stage 2, plus waiting until iteration m = 10[SUP]6[/SUP] - we can perform this modified stage 2 with tremendously more factors. In fact, letting m be the iteration number of the PRP test, so m = 10[SUP]6[/SUP] or so:
(3[SUP]2pE*2^m[/SUP])*(3[SUP]2pEq[/SUP])*(3[SUP]2pEq[/SUP]-1) = (3[SUP]2pE(2^m+q)[/SUP])*(3[SUP]2pEq[/SUP]-1)

On the left, the second and third factors differ by just 1, so they can be calculated simultaneously. The only two costs of this modified stage 2 are then actually getting to iteration m=10^6 and also multiplying by both of the factors 3[SUP]2pEq[/SUP] and 3[SUP]2pEq[/SUP]-1 mod Mp. So for double the multiplication and waiting for iteration m = 10^6, we get about 13.6 possible prime factors per iteration of stage 2.

As a modification of this - and I don't know whether this is possible or even helpful based on the increased cost - if we cut the array containing 3[SUP]2pE*2[/SUP], 3[SUP]2pE*4[/SUP], 3[SUP]2pE*6[/SUP], ... in half (by cutting out the later half of the list) so that we can then store a second array of the same half-length containing 3[SUP]2pE*2pE*2[/SUP], 3[SUP]2pE*2pE*4[/SUP], 3[SUP]2pE*2pE*6[/SUP], ..., then we can modify the above test to calculate

(3[SUP]2pE*2^m[/SUP])*(3[SUP]2pE*2pEq[/SUP])*(3[SUP]2pEq[/SUP]-1) = (3[SUP]2pE(2^m+2pEq)[/SUP])*(3[SUP]2pEq[/SUP]-1)

This guarantees that on the right, the new product has some number of prime factors which are distinct from the factors of E and q. |
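The first identity in that post is just 3^a * 3^b = 3^(a+b) applied modulo Mp; a quick sanity check with small stand-in values (the numbers below are arbitrary toys, not realistic parameters):

```python
# Sanity check of the identity (3^(a*2^m))*(3^(a*q)) = 3^(a*(2^m+q)) mod Mp,
# with toy values standing in for 2pE, the iteration count m, and a prime q.
Mp = (1 << 31) - 1
a, m, q = 1234, 20, 101
lhs = pow(3, a * (1 << m), Mp) * pow(3, a * q, Mp) % Mp
rhs = pow(3, a * ((1 << m) + q), Mp)
assert lhs == rhs
```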
I think the following alternate Stage 2 algorithms are well thought out. I wrote this and sat with it for a few days before posting because I wanted to make sure I understood the Pm1 algorithms so that I do not accidentally mix Stage 1 and 2. These two modifications are what I would like to present. My apologies if they are flawed, and please tell me if you see a flaw. I currently do not see one.
Let Mp be the Mersenne number to be tested. Let B1 and B2 be chosen as usual. Assume B2 > B1, as when a great deal of RAM is used. Let E be the exponent calculated in Stage 1 of Pm1. Suppose that Pm1 found no factors in stage 1. Let H = 3[SUP]2pE[/SUP] as usual.

Modification 1: Suppose we are willing to perform the PRP test base H instead of base 3. Let m = the iteration number for the PRP test. We can presuppose m = 10^6. Before Stage 2 of Pm1, we perform the first m iterations of the PRP test using base H. Let M = H[SUP]2^m[/SUP] = 3[SUP]2pE*2^m[/SUP] = the value of the m[SUP]th[/SUP] iteration of the PRP test mod Mp with base H instead of base 3.

[B]Modification 1 of Stage 2 calculates the following:[/B] Π (M*H[SUP]q[/SUP]-1)*(H[SUP]q[/SUP]-1) mod Mp = Π (3[SUP]2pE*(2^m+q)[/SUP]-1)*(3[SUP]2pEq[/SUP]-1) mod Mp, where the product is taken over all q such that B1<q<=B2.

The time cost of this Modification 1 of Stage 2 is about 50% larger, plus the cost of arriving at iteration m = 10^6, which according to my computer's speed adds about 25% more time depending on the length of Mp. Thus, in total, including Stage 1, this Pm1 test is about 38% more time expensive:

1. Calculate M. (First 1% of the PRP test.)
2. Calculate H[SUP]q[/SUP]. (First multiplication, from an array, and part of the original Stage 2 algorithm.)
3. Multiply M by H[SUP]q[/SUP] and subtract 1. (Second multiplication, and the only new multiplication. The subtraction is essentially free.)
4. Multiply by H[SUP]q[/SUP]-1. (Third multiplication, and part of the original Stage 2 algorithm.)

The gain is all the unknown factors of each 2^m+q, which have on average 13.45 distinct prime factors each, a huge step up from the single prime factor q. Note that 2^m+q may have common factors with E. However, since 2^m+q has over 3*10[SUP]5[/SUP] digits, that does not prevent us from gaining many prime factors.
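The mechanics of the Modification 1 product can be shown with tiny toy numbers (all parameters below are made up for illustration; a real run works with multi-million-digit residues, and this sketch says nothing about whether the extra factors are useful):

```python
from math import gcd

# Tiny illustration of the Modification 1 product. H stands in for 3^(2pE),
# M for H^(2^m), and we accumulate prod((M*H^q - 1)*(H^q - 1)) mod Mp.
p = 23
Mp = (1 << p) - 1              # 8388607 = 47 * 178481
E = 2**4 * 3**2 * 5 * 7        # toy smooth stage 1 exponent
H = pow(3, 2 * p * E, Mp)
m = 10                         # toy PRP iteration count
M = pow(H, 1 << m, Mp)

acc = 1
for q in (11, 13, 17, 19, 23): # toy "stage 2 primes" in (B1, B2]
    Hq = pow(H, q, Mp)
    acc = acc * ((M * Hq - 1) * (Hq - 1)) % Mp
g = gcd(acc, Mp)               # here g picks up 47, since 46 = 2*23 | 2pE
```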
[B]Modification 2 of Stage 2 calculates the following:[/B] Suppose we do not want to perform the PRP test base H and we insist on performing the PRP test base 3. Then let B3 = a (necessarily even) primorial such that B3 >> B2. Let M = H[SUP]B3[/SUP]. Modification 2 of Stage 2 instead calculates Π (M*H[SUP]q[/SUP]-1)*(H[SUP]q[/SUP]-1) mod Mp = Π (3[SUP]2pE(B3+q)[/SUP]-1)*(3[SUP]2pEq[/SUP]-1) mod Mp, where the product is taken over all q such that B1<q<=B2.

The time cost of this Modification 2 of Stage 2 is about 50% larger, plus the fast calculation of M, even for B3 equaling a fairly large primorial. Thus, in total, including Stage 1, this Pm1 test is about 25% more time expensive:

1. Calculate M. (Fairly fast.)
2. Calculate H[SUP]q[/SUP]. (First multiplication, from an array, and part of the original Stage 2 algorithm.)
3. Multiply M by H[SUP]q[/SUP] and subtract 1. (Second multiplication, and the only new multiplication. The subtraction is essentially free.)
4. Multiply by H[SUP]q[/SUP]-1. (Third multiplication, and part of the original Stage 2 algorithm.)

The gain is all the factors of each B3+q, none of which are divisible by factors of B3 nor by q. Primorials grow quickly in length, so with very few multiplications, B3 could be much, much larger than B2. Prime95 is already equipped to handle the multiplication of all small primes below 40 million. My understanding as of now is that it is better to have B3 >> B2. If B3 can be taken as 2pE, or just p*B1#, each B3+q could be guaranteed to have no factors in common with E and q. I think the smallest viable value of B3 is the smallest primorial greater than B2[SUP]1/0.2[/SUP], based on a cursory understanding of the Dickman function.

I do not want to dig into any other modifications (such as combining Modifications 1 and 2, or sacrificing time and splitting any of these into the usual Stage 2 and a true Stage 3) until we discuss the above.
At the risk of perhaps missing a basic mistake, I think these two modifications work and are beneficial. |
[QUOTE=JuanTutors;581580][B]Modification 1 of Stage 2 calculates the following:[/B]
1. Calculate M. (First 1% of the PRP test.) 2. Calculate H[SUP]q[/SUP]. (First multiplication, from an array, and part of the original Stage 2 algorithm) 3. Multiply M by H[SUP]q[/SUP] and subtract 1. (Second multiplication, and the only new multiplication. The subtraction is essentially free.) 4. Multiply by H[SUP]q[/SUP]-1. (Third multiplication, and part of the original stage 2 algorithm)[/quote] Current P-1 stage 2 uses 1 multiplication to handle two q's. In your new scheme, it will take 4 multiplications to handle 1 q (Hq, M*Hq, (Hq-1) * (M*Hq-1), and Π).

[QUOTE=JuanTutors;581580]The time cost of this Modification 1 of Stage 2 algorithm are 50% larger, plus the cost of arriving at iteration m = 10^6 based on this procedure, which according to my computer's speed adds about 25% more time depending on the length of Mp. Thus, in total, including Stage 1, this Pm1 test is about 38% more time expensive: [/quote] That makes stage 2 roughly 8x costlier. Since stage 2 is already longer than stage 1, overall this will be about 5x more time than regular P-1.

[QUOTE=JuanTutors;581580]The gain is all the unknown factors of each 2^m+q, which have on average 13.45 distinct prime factors each, a huge step from the single prime factor q. Note that 2^m+q may have common factors with E. However, since 2^m+q has over 3*10[SUP]5[/SUP] digits, that does not prevent us from gaining many prime factors.[/QUOTE] Sadly, most of the 13.45 factors are completely useless. A factor q will divide a random p-1 with probability 1/q (well, technically 1/(q-1)). Which means, for example, a 10-digit q is only 1% as good as an 8-digit q. A 20-digit q is 1/10 billion as good as a 10-digit q. So, while we gain a lot of factors, there is not a chance in our lifetime of ever seeing one appear in a found factor's P-1 decomposition. You're better off just increasing the B2 bound in traditional stage 2. |
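The "1% as good" and "1/10 billion as good" figures follow directly from the 1/q heuristic; a small check (illustrative Python, function name is mine):

```python
# Heuristic from the post: a prime q divides a random candidate's p-1 with
# probability ~1/q, so a factor's usefulness falls off linearly with its size.
def usefulness_ratio(d_large, d_small):
    # relative usefulness of a d_large-digit q versus a d_small-digit q:
    # (1/10^d_large) / (1/10^d_small) = 10^(d_small - d_large)
    return 10.0 ** (d_small - d_large)

print(usefulness_ratio(10, 8))    # 0.01: a 10-digit q is ~1% as good
print(usefulness_ratio(20, 10))   # 1e-10: a 20-digit q is 1/10 billion as good
```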
[QUOTE=axn;581595]Current P-1 stage 2 uses 1 multiplication to handle two q's. In your new scheme, it will take 4 multiplications to handled 1 q (Hq, M*Hq, (Hq-1) * (M*Hq-1), and Π). [/QUOTE]
Ah, I missed the step of multiplying by MH^q-1, making it 4 multiplications, not 3. Can you lead me to an explainer that shows how one multiplication is used to handle two q's? I was under the impression that two multiplications are used to handle one q: one multiplication to calculate H^q, and then another to calculate Π(H^q-1). [QUOTE]Sadly, most of the 13.45 factors are completely useless. A factor q will divide a p-1 with probability 1/q (well, technically 1/(q-1)) ...[/QUOTE] I was trying to find this theorem! Which one is it? [QUOTE] ... Which means, for example, a 10-digit q is only 1% as good as an 8-digit q. A 20-digit q is 1/10 billion as good as a 10-digit q. So, while we gain a lot of factors, there is not a chance in our life time to ever see one appear in a found factor's P-1 decomposition. You're better off to just increase the B2 bound in traditional stage 2.[/QUOTE] So even though MH^q-1 has, I believe, on average ln(2) (? or 1?) factors between B2 = 10^7 and B2^2 = 10^14, the probability that any factor we happen to come upon will help us find a factor of Mp is about 10^-10.5, which is about 10^-(10.5-6) = 10^-4.5 times the probability that any factor between B1 and B2 helps. So for whatever cost we add via this method, ignoring all potential factors larger than B2^2 due to diminishing benefits, we multiply the probability of finding a factor during this modified stage 2 by about 1+ln(2)*10^-4.5. Does that sound about right? Even if it's on the same order, that's pretty weak. |
[QUOTE=JuanTutors;581667]Can you lead me to an explainer that shows how one multiplication is used to handle two q's? I was under the impression that two multiplications are used to handle one q. One multiplication was used to calculate H^q and then another multiplication was used to calculate Π(H^q-1). [/quote]
I hate to do this to you, but... you can read "SPEEDING THE POLLARD AND ELLIPTIC CURVE METHODS OF FACTORIZATION". The whole paper is worth the read, but section 4.1 (Reducing the Cost of the Standard Continuation) is what you want specifically. [QUOTE=JuanTutors;581667]I was trying to find this theorem! Which one is it?[/quote] I don't know any theorem as such, but it follows from the fact that a number p divides every pth term in the number line. [QUOTE=JuanTutors;581667]So that even though MH^q-1 has I believe on average ln(2) (? or 1?) factors between B2 = 10^7 and B2^2 = 10^14, the probability that any factor we happen to come upon will help us find a factor of Mp is about 10^-10.5, which is about 10^-(10.5-6) = 10^-4.5 times the probability that any factor between B1 and B2 helps. So for whatever cost we add via this method, ignoring all potential factors larger than B^2 due to diminishing benefits, we multiply the probability of finding a factor during this modified stage 2 by about 1+ln(2)*10^-4.5. Does that sound about right? Even if it's on the same order, that's pretty weak.[/QUOTE] Not sure about the ln(2) factor, but your reasoning looks good. And the basic conclusion is that, you're better off increasing B2 rather than going this route. In fact, the Brent-Suyama extension is actually trying to do what you're trying to do but in a more efficient way. |
Maybe related, please see the thread about what I coined "PRP-1": [url]https://www.mersenneforum.org/showthread.php?t=23628[/url]
|