[QUOTE=bcp19;289297][I]p[/I]n#=6469693230 when n=10. log2(3^6468683230) is beyond my current ability to calculate.[/QUOTE]
log2(3^6468683230) = 6468683230 * log2(3)

log2(3) is just over 1.5 (2^1.5 = 2 * sqrt(2) = 2.828...). That would give you the exponent of the Mersenne number equal in magnitude to 3 raised to the power of the tenth primorial: approximately 10,000,000,000 -- beyond our current limits in prime95.

[quote]Seeing as 3^2310 (fifth primorial) is 1.4128e+1102 (roughly M3660) and 3^30030 (sixth primorial) = 8.9388e+14327 (roughly M47590), it's fairly easy to comprehend that 3^(8th or 9th primorial) will exceed a typical Mp.[/quote]
... and yet, the tenth primorial is simply the product of the first ten primes = 2*3*5*7*11*13*17*19*23*29.

[quote]And calculating B1 is also beyond my current ability.[/quote]
Suppose B1 = 30. That's pretty small; in fact, it's the default minimum value that prime95 substitutes for any smaller value. Stage 1 of P-1 computes 3^(product of prime powers up to B1), which for B1=30 is 3^(2^4*3^3*5^2*7*11*13*17*19*23*29), which is 3^(tenth primorial)*3^360.

So, even the default minimum B1 requires computing a number which is 3^360 times 3^(tenth primorial), and 3^(tenth primorial) is itself larger than the current upper limit of Mersenne number that prime95 can handle.

The point is that even the smallest B1 value we use requires modular arithmetic, so there's no point in trying to save numbers calculated in stage 1 for use on other exponents. There is no duplication (from one exponent to another) after as few as 8-10 terms.
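For the curious, the stage-1 exponent for B1=30 is easy to compute directly. A rough sketch in Python (the primes list is hard-coded for B1=30; the variable names are mine):

```python
from math import log10

B1 = 30
primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]  # all primes up to B1

# Stage 1 exponent: product of the largest prime powers not exceeding B1
E = 1
for p in primes:
    pk = p
    while pk * p <= B1:
        pk *= p          # e.g. 2 -> 4 -> 8 -> 16, since 32 > 30
    E *= pk

primorial10 = 2 * 3 * 5 * 7 * 11 * 13 * 17 * 19 * 23 * 29
print(E)                 # 2329089562800
print(E // primorial10)  # 360
print(int(E * log10(3))) # decimal digits of 3^E: about 1.1 trillion
```

The last line is why modular reduction is unavoidable: written out in full, 3^E for even this minimum B1 would have on the order of a trillion decimal digits.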
[QUOTE=cheesehead;289314]
3^(2^4*3^3*5^2*7*11*13*17*19*23*29), which is 3^(tenth primorial)*3^360 [/QUOTE]
Gotcha! :razz: [URL="http://www.mersenneforum.org/showpost.php?p=288883&postcount=20"]I told you so![/URL] Ha ha ha ha. Next time don't jump to byte the poor bcp guy... I bet every mathematician makes this mistake at least 10 times in his life! :smile:
What James said above got me thinking: is it possible to start a P-1 search from your save data and just push B1 or B2 out a little, without having to redo most of the work?
That would make sense to me as what Never Odd or Even seems to do to get his mammoth P-1 GHz-d: [URL="http://mersenne-aries.sili.net/exponent.php?exponentdetails=383838383"]http://mersenne-aries.sili.net/exponent.php?exponentdetails=383838383[/URL] Just curious.
[QUOTE=KyleAskine;289362]What James said above got me thinking. Is it possible to start a P-1 search using your save data and just push B1 or B2 out a little and not have to do most of the work.[/QUOTE]Certainly. A downside of P-1 is that it doesn't know whether it has found a factor until it does a GCD, which it only does at the end of stage1 and the end of stage2 (assuming it found nothing in stage1). As long as you keep the savefile ([i]KeepPminus1SaveFiles=1[/i] in prime.txt and specify assignments as "Pminus1="), stage1 will continue right where it left off with successively higher B1s. That is what I am currently doing with [url=http://mersenne-aries.sili.net/M595900037]M595,900,037[/url], with worktodo like this:[code]Pminus1=N/A,1,2,595900037,-1,3000000,3000000,82
Pminus1=N/A,1,2,595900037,-1,3500000,3500000,82
Pminus1=N/A,1,2,595900037,-1,4000000,4000000,82
...
Pminus1=N/A,1,2,595900037,-1,6000000,6000000,82[/code]In a few months, when I've done stage1 up to (at least) B1=6,000,000, I'll move on to stage2, for which I can use the B2_start parameter of Pminus1= lines:[quote]Pminus1=k,b,n,c,B1,B2[,how_far_factored][,[b]B2_start[/b]][,"factors"][/quote]to specify progressively higher B2s until I get up to around [url=http://mersenne-aries.sili.net/prob.php?exponent=595900037&work=500&factorbits=82]B2=130,000,000[/url]. On a side note: doing P-1 this big with current hardware is silly. :smile:
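The exponentiate-then-GCD mechanics of stage 1 can be sketched in a few lines of Python. This is only a toy illustration, not prime95's implementation; the example exponent 67 and bound B1=2700 are my own choice (M67's well-known factor 193707721 = 2kp+1 has a 2700-smooth k), picked so the run finishes instantly:

```python
from math import gcd

def pminus1_stage1(p, B1, base=3):
    """Toy P-1 stage 1 on the Mersenne number 2^p - 1."""
    N = (1 << p) - 1
    # Build E = p * product of the largest prime powers <= B1. The factor p is
    # included because any factor q of 2^p - 1 satisfies q = 2kp + 1, so p | q - 1.
    E = p
    sieve = [True] * (B1 + 1)
    for q in range(2, B1 + 1):
        if sieve[q]:
            for m in range(q * q, B1 + 1, q):
                sieve[m] = False
            pk = q
            while pk * q <= B1:
                pk *= q
            E *= pk
    x = pow(base, E, N)      # stage 1: one long modular exponentiation
    return gcd(x - 1, N)     # a nontrivial GCD means a factor was found

g = pminus1_stage1(67, 2700)
print(g)  # divisible by 193707721, since its k = 2^2 * 3^3 * 5 * 2677 is 2700-smooth
```

Note how the GCD only happens once, at the very end: until then the code (like prime95) has no idea whether a factor is already "inside" x, which is exactly why restarting from a savefile with a higher B1 costs nothing.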
[QUOTE=LaurV;289337]Gotcha! :razz:
[URL="http://www.mersenneforum.org/showpost.php?p=288883&postcount=20"]I told you so![/URL] Ha ha ha ha.[/QUOTE]
Yes, in my preceding post I should have written (corrections in boldface):

3^(2^4*3^3*5^2*7*11*13*17*19*23*29), which is 3^[B](tenth primorial*360) = [3^(tenth primorial)]^360[/B]

So, even the default minimum B1 requires computing a number which is [B]the 360th power of 3^(tenth primorial)[/B].

[quote]Next time don't jump to byte the poor bcp guy...[/quote]Where did I "jump to byte" anyone? :-) My calculation wasn't correcting any mistake bcp19 had made, other than an apparent assumption that part of the stage 1 computation could be usefully saved for reuse on another exponent. I was supplying a computation that neither bcp19 nor I had (apparently) actually made yet.

In fact, before I performed the computation for that posting, I had expected that the maximum B1 at which a saving of partial computation could be useful would be higher than 30. So I surprised myself, too.

The mistake I made in my calculation didn't invalidate the point I intended to communicate: that even the smallest B1 used in prime95 requires modular arithmetic, which makes it impractical to save a partial computation large enough to usefully speed up stage 1 for a different exponent.

- - - - - - - - - -

[QUOTE=James Heinrich;289368]On a side note: doing P-1 this big with current hardware is silly. :smile:[/QUOTE]
"Silly", along with "Miniscule", is some of our middle names here on mersenneforum.org
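The corrected identity 3^(a*b) = (3^a)^b is easy to sanity-check with modular arithmetic, using small stand-in numbers since the real ones don't fit anywhere (the modulus and values here are my own toy choices):

```python
# Verify 3^(a*b) == (3^a)^b, reduced modulo a small Mersenne number
a, b = 2310, 360       # fifth primorial standing in for the tenth, to keep it small
N = (1 << 127) - 1     # M127 as a stand-in modulus
lhs = pow(3, a * b, N)
rhs = pow(pow(3, a, N), b, N)
print(lhs == rhs)  # True
```

The same identity is what lets stage 1 apply the prime powers of E one at a time instead of ever forming the full exponent's power directly.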
[QUOTE=cheesehead;289376]"Silly", along with "Miniscule", is some of our middle names here on mersenneforum.org[/QUOTE]
Does that mean I have to be bsp19 now?
[QUOTE=bcp19;289383]Does that mean I have to be bsp19 now?[/QUOTE]
[B]Binary space partitioning[/B] ([B]BSP[/B]) is a method for recursively subdividing a space into convex sets by hyperplanes. Isn't that what you are doing? :smile:
hrrrr... hating anything involving bsp's (board support package for winCE and win mobile, system programmers know what I am talking about...)
edit: and yes, that is what [I][B]I[/B][/I] am doing, wasting my days and getting white hair...
[QUOTE=LaurV;289423]hrrrr... hating anything involving bsp's (board support package for winCE and win mobile, system programmers know what I am talking about...)
edit: and yes, that is what [I][B]I[/B][/I] am doing, wasting my days and getting white hair...[/QUOTE] Just keep in mind all that nicely partitioned space you're producing, making the world a more-organized place. :-)- |
I've been doing some reading, and have a few questions regarding P-1 and ECM. From the usage of them on the site, it appears to be a given that P-1 is more efficient than ECM for time spent vs factor found for exponents that were not factored during TF and prior to LL.

So, while P-1 will find all factors 2kp+1 where the 2nd largest factor of k < B1 and the largest factor of k < B2, will the ECM also find all factors? (If I am asking this question badly: given that factor F is 22 digits long and can be found with B1=50000 using 300 curves [25 digit ECM], will F also be found using B1=250000 using 700 curves [30 digit ECM] and higher?)

At what point is it better to use ECM for factorization, and how do you determine what level of ECM to start at?
[QUOTE=bcp19;291113]I've been doing some reading, and have a few questions regarding P-1 and ECM. From the usage of them on the site, it appears to be a given that P-1 is more efficient than ECM for time spent vs factor found for exponents that were not factored during TF and prior to LL. So, while P-1 will find all factors 2kp+1 where the 2nd largest factor of k < B1 and the largest factor of k < B2, will the ECM also find all factors? (If I am asking this question badly,[/QUOTE]No, it's clear. :-)
[quote]given that factor F is 22 digits long and can be found with B1=50000 using 300 curves[25 digit ECM], will F also be found using B1=250000 using 700 curves[30 digit ECM] and higher?)[/quote]Not necessarily. P-1 is a deterministic algorithm; it will find all factors within the search area defined by B1 and B2. But ECM is a probabilistic algorithm; it examines a random selection of elliptic curves within the search area defined by its B1/B2. The more curves examined, the greater its chances of finding a factor that is within the search area. But ECM is never [I]guaranteed[/I] to find a particular factor that exists within the search area, as P-1 is.

[quote]At what point is it better to use ECM for factorization and how do you determine what level of ECM to start at?[/quote]As the search area defined by B1/B2 expands, P-1's performance bogs down faster than ECM's. That is, as B1/B2 go higher, P-1's guarantee of finding a factor in the search area is handicapped by its longer and longer execution time. By contrast, ECM's execution time grows more slowly with search area size, so at sufficiently high bounds its probability of finding a factor [I]within a given time of execution[/I] beats P-1's probability by a larger and larger margin.

(By "P-1's probability" here, I'm referring to the probability that a factor actually exists in the search area. P-1's certainty of finding a factor doesn't help if no such factor exists within the search area. OTOH, ECM's probability of finding a factor is [probability that a factor exists in the search area] * [probability that one of ECM's random curves will find a factor that does exist in the search area].)

There are general guidelines for the optimum levels at which to switch from P-1 to ECM, but I don't have a link handy.
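P-1's determinism can be made concrete: given a candidate factor q = 2kp+1 of M[I]p[/I], you can decide in advance whether P-1 with given bounds will find it, just by factoring q-1. A simplified sketch (the function name and the M67 example are mine; it models the "all prime powers <= B1, plus at most one extra prime <= B2" rule, ignoring refinements like Brent-Suyama):

```python
def pminus1_finds(q, p, B1, B2):
    """Would P-1 on 2^p - 1 with bounds B1, B2 find the known factor q?

    Models the rule: every prime power in q - 1 must be <= B1, except the
    exponent p itself (included automatically) and at most one extra prime,
    to the first power, in (B1, B2].
    """
    m = q - 1
    large = []                 # prime factors whose prime-power part exceeds B1
    d = 2
    while d * d <= m:          # plain trial division; fine for small q
        if m % d == 0:
            e = 0
            while m % d == 0:
                m //= d
                e += 1
            if d != p and d ** e > B1:
                large.append((d, e))
        d += 1
    if m > 1 and m != p and m > B1:   # leftover prime factor
        large.append((m, 1))
    if not large:
        return True            # stage 1 alone suffices
    if len(large) != 1:
        return False
    r, e = large[0]
    return e == 1 and r <= B2  # stage 2 catches a single large prime

# M67 = 193707721 * 761838257287; for q = 193707721, q - 1 = 2^3 * 3^3 * 5 * 67 * 2677
print(pminus1_finds(193707721, 67, 30, 3000))  # True: 2677 is caught in stage 2
print(pminus1_finds(193707721, 67, 30, 2000))  # False: 2677 > B2
```

No such advance test exists for ECM: whether a given curve finds q depends on the group order of the curve modulo q, which varies randomly from curve to curve. That difference is the whole deterministic-vs-probabilistic distinction above.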