#34 |
"Mihai Preda"
Apr 2015
55B16 Posts |
Quote:
#35 |
"Mihai Preda"
Apr 2015
137110 Posts |
For fun here's the factorization of 2^1620 - 1:
(done with the nice applet https://www.alpertron.com.ar/ECM.HTM )

Tiny factors:
3^5 × 5^2 × 7 × 11 × 13 × 19 × 31 × 37 × 41 × 61 × 73 × 109 × 151 × 163 × 181 × 271 × 331 × 541 × 631 × 811

Average factors, < 3M (e.g. could be covered in first-stage P-1):
1321 × 1621 × 2593 × 6481 × 9721 × 15121 × 23311 × 30241 × 49681 × 54001 × 71119 × 87211 × 135433 × 246241 × 262657 × 279073 × 348031 × 537841

Large factors, [3M, 4B] (could be covered in second-stage P-1):
3 618757 × 6 876901 × 18 837001 × 29 247661 × 74 967931 × 97 685839 × 106 979941 × 168 410989 × 272 010961 × 1511 474581 × 2437 880491 × 2458 695061 × 3934 029061

Larger factors, > 4B:
Code:
4977 454861
448217 524891
9 893662 806061
10 360573 664851
49 971617 830801
165 041853 060421
385 838642 647891
1969 543281 137041
11096 527935 003481
166242 935471 754241
1583 423452 213582 178911 805893 942695 192421 (40 digits)
4343 952637 722706 853771 280086 533392 805261 (40 digits)
17 645665 556213 400107 370602 081155 737281 406841 (44 digits)

PS: and when you think that the expected number of prime factors for a number of that magnitude, given by log(log(x)), is 7.

Last fiddled with by preda on 2018-09-06 at 10:55 Reason: add note
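As a quick sanity check on that PS (this snippet is mine, not from the thread): the Hardy-Ramanujan heuristic says a number of size x typically has about log(log(x)) distinct prime factors, and for x = 2^1620 - 1 that indeed comes out near 7, versus the several dozen primes actually listed above.

```python
import math

# Typical count of distinct prime factors of x is ~ log(log(x))
# (Hardy-Ramanujan). For x = 2^1620 - 1, log(x) is essentially 1620 * log(2).
expected = math.log(1620 * math.log(2))
print(f"{expected:.2f}")  # ≈ 7.02
```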
#36 |
"Forget I exist"
Jul 2009
Dumbassville
26×131 Posts |
Quote:
#37 |
Sep 2003
A1916 Posts |
Quote:
#38 |
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
123628 Posts |
I can confirm that running something like the factorization of 2^3600-1 will eat up core-days on an i3, i7, or Core 2 Duo. I also had an instance of a system left unresponsive with a black screen until restart, a couple of days into two browsers working on different powers. Mercifully, the script is capped somewhere between 2^18000-1 and 2^36000-1. It can be very useful at hundreds of digits; it bogs down around 130-bit factors.

Last fiddled with by kriesel on 2018-09-06 at 14:14
#39 |
Jun 2003
5,051 Posts |
As nice as the applet and P95 (and even GMP-ECM) are, you wouldn't actually want to run ECM on these "well-known" numbers: somebody will already have done it and made the result available in factordb.
#40 |
Jun 2003
5,051 Posts |
I have been following this thread with interest. This is Brent-Suyama-esque in the way it generates additional stage 2 factors using algebraic factorization [BS uses x^d-1, with fixed d and variable x, while this uses 2^k-1 with variable k].

The basic idea is intriguing. I am not convinced that it is superior to standalone P-1 + PRP from a "speed up the next Mersenne prime discovery" perspective, but it is good if we want to maximize the number of factors found.

Downsides:
1) We are no longer getting a nice standard base-3 Fermat residue. The residue is from a non-standard PRP test (not Fermat), with a non-standard base.
2) The doublecheck will have to do stage 1 for no benefit. This could be a net loss to the project.
3) The number of factors in 2^k-1 is less important than the quality of the factors: you want smaller factors rather than larger ones. The probability of a prime p dividing (f-1) is 1/(p-1), so really large factors (say > 10^15) are practically useless.
4) The deeper you go into the PRP test, the less useful a factor becomes (I believe you mentioned this already).
5) As noted, stage 1 is still not protected by GEC, but I guess there is always the doublecheck.

I'll ignore the logistical difficulties of combining P-1 + PRP into a single work unit, because if it gives a net benefit, it might be worth it.

Practically, to compare this scheme with traditional P-1, we can assume that we'll do at least the same B1/B2 with the new method. So stage 1 will use the same 3^2*P*E as P-1, and stage 2 will multiply in at least all the iterations that together cover all primes p with B1 < p <= B2. Again, as you noted, every prime p divides 2^(p-1)-1 by force, and sometimes a much smaller exponent [in pari/gp we'd do znorder( Mod(2,p) )]. We can reduce the number of k's by carefully selecting k's that share multiple stage 2 primes (there is a simple algorithm to do this). Nonetheless, we must do about B2 iterations to fully cover all the stage 2 primes. This is much larger than a traditional P-1 stage 2. (On the plus side, it doesn't require much memory.)

But once we're committed to the PRP test, we can keep going past B2, multiplying in any and all "interesting" k's, increasing the chance of at least saving a double check. Putting it all together into a theoretical model to evaluate the efficiency is beyond me, unfortunately :-(
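The znorder(Mod(2,p)) mentioned above is the multiplicative order of 2 mod p: the smallest k with p | 2^k - 1. A minimal brute-force Python equivalent (my own sketch; pari/gp computes it efficiently via the factorization of p-1):

```python
def znorder2(p):
    """Smallest k with 2^k = 1 (mod p), i.e. the smallest k such that
    the odd prime p divides 2^k - 1. Brute-force version of pari/gp's
    znorder(Mod(2, p))."""
    k, r = 1, 2 % p
    while r != 1:
        r = (r * 2) % p
        k += 1
    return k

# By Fermat, every odd prime p divides 2^(p-1) - 1, but the order is often
# much smaller, which is what lets one iteration cover many stage 2 primes:
print(znorder2(7))   # 3  (7 divides 2^3 - 1)
print(znorder2(31))  # 5  (31 = 2^5 - 1)
print(znorder2(23))  # 11 (23 divides 2^11 - 1 = 2047, far below 2^22 - 1)
```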
#41 |
"Mihai Preda"
Apr 2015
55B16 Posts |
Good news.
Indeed the selection of the iterations K to test is critical for the effectiveness of the method. After a bit of work, I think I found a good algorithm for K selection.

I did an empirical analysis, and here is the good news (modeled for an 88M exponent, B1=2M, selecting 2% of iterations for P-1 testing, so the cost is about 18M multiplications, ignoring the cost of GCDs). Coverage:

Code:
1121674 (100.00%) of 1121674 primes between 2M and 20M
1162084 (99.92%) of 1163047 primes between 20M and 40M
536610 (47.55%) of 1128461 primes between 40M and 60M
212469 (19.19%) of 1107267 primes between 60M and 80M
151103 (13.84%) of 1092073 primes between 80M and 100M
117082 (10.84%) of 1080193 primes between 100M and 120M
95435 (8.91%) of 1070551 primes between 120M and 140M
79350 (7.47%) of 1062259 primes between 140M and 160M
68294 (6.47%) of 1055927 primes between 160M and 180M
58575 (5.59%) of 1048552 primes between 180M and 200M
51329 (4.92%) of 1043603 primes between 200M and 220M
46313 (4.46%) of 1039004 primes between 220M and 240M
42141 (4.07%) of 1034316 primes between 240M and 260M
38832 (3.77%) of 1030209 primes between 260M and 280M
36052 (3.51%) of 1026256 primes between 280M and 300M
33383 (3.26%) of 1022881 primes between 300M and 320M
31257 (3.53%) of 886228 primes between 320M and 340M
29459 (4.63%) of 636085 primes between 340M and 360M
28263 (4.45%) of 634450 primes between 360M and 380M
26301 (4.15%) of 633299 primes between 380M and 400M
24753 (3.92%) of 631557 primes between 400M and 420M
23871 (3.79%) of 629827 primes between 420M and 440M
22363 (3.56%) of 628260 primes between 440M and 460M
21373 (3.41%) of 627227 primes between 460M and 480M
20811 (3.33%) of 625672 primes between 480M and 500M
19606 (3.14%) of 623941 primes between 500M and 520M
19049 (3.06%) of 623487 primes between 520M and 540M
17931 (2.88%) of 622252 primes between 540M and 560M
17645 (2.84%) of 621314 primes between 560M and 580M
16852 (2.72%) of 620060 primes between 580M and 600M
1283604 (2.82%) of 45519167 primes between 600M and above
big total 5421161

Last fiddled with by preda on 2018-09-10 at 00:01 Reason: add total
#42 |
"William"
May 2003
New Haven
93E16 Posts |
Quote:
Code:
(2^(1)+2^((1+1)/2)+1)
(2^(1)-2^((1+1)/2)+1)
(2^(1)+1)
(2^(1)-1)
(2^(3)+2^((3+1)/2)+1)/(2^(1)-2^((1+1)/2)+1)
(2^(3)-2^((3+1)/2)+1)/(2^(1)+2^((1+1)/2)+1)
(2^(3)+1)/(2^(1)+1)
(2^(3)-1)/(2^(1)-1)
(2^(3^2)+2^((3^2+1)/2)+1)/(2^(3)-2^((3+1)/2)+1)
(2^(3^2)-2^((3^2+1)/2)+1)/(2^(3)+2^((3+1)/2)+1)
(2^(3^2)+1)/(2^(3)+1)
(2^(3^2)-1)/(2^(3)-1)
(2^(3^3)+2^((3^3+1)/2)+1)/(2^(3^2)-2^((3^2+1)/2)+1)
(2^(3^3)-2^((3^3+1)/2)+1)/(2^(3^2)+2^((3^2+1)/2)+1)
(2^(3^3)+1)/(2^(3^2)+1)
(2^(3^3)-1)/(2^(3^2)-1)
(2^(3^4)+2^((3^4+1)/2)+1)/(2^(3^3)-2^((3^3+1)/2)+1)
(2^(3^4)-2^((3^4+1)/2)+1)/(2^(3^3)+2^((3^3+1)/2)+1)
(2^(3^4)+1)/(2^(3^3)+1)
(2^(3^4)-1)/(2^(3^3)-1)
(2^(5)+2^((5+1)/2)+1)/(2^(1)-2^((1+1)/2)+1)
(2^(5)-2^((5+1)/2)+1)/(2^(1)+2^((1+1)/2)+1)
(2^(5)+1)/(2^(1)+1)
(2^(5)-1)/(2^(1)-1)
(2^(3*5)+2^((3*5+1)/2)+1)*(2^(1)+2^((1+1)/2)+1)/((2^(5)-2^((5+1)/2)+1)*(2^(3)-2^((3+1)/2)+1))
(2^(3*5)-2^((3*5+1)/2)+1)*(2^(1)-2^((1+1)/2)+1)/((2^(5)+2^((5+1)/2)+1)*(2^(3)+2^((3+1)/2)+1))
(2^(3*5)+1)*(2^(1)+1)/((2^(5)+1)*(2^(3)+1))
(2^(3*5)-1)*(2^(1)-1)/((2^(5)-1)*(2^(3)-1))
(2^(3^2*5)+2^((3^2*5+1)/2)+1)*(2^(3)+2^((3+1)/2)+1)/((2^(3*5)-2^((3*5+1)/2)+1)*(2^(3^2)-2^((3^2+1)/2)+1))
(2^(3^2*5)-2^((3^2*5+1)/2)+1)*(2^(3)-2^((3+1)/2)+1)/((2^(3*5)+2^((3*5+1)/2)+1)*(2^(3^2)+2^((3^2+1)/2)+1))
(2^(3^2*5)+1)*(2^(3)+1)/((2^(3*5)+1)*(2^(3^2)+1))
(2^(3^2*5)-1)*(2^(3)-1)/((2^(3*5)-1)*(2^(3^2)-1))
(2^(3^3*5)+2^((3^3*5+1)/2)+1)*(2^(3^2)+2^((3^2+1)/2)+1)/((2^(3^2*5)-2^((3^2*5+1)/2)+1)*(2^(3^3)-2^((3^3+1)/2)+1))
(2^(3^3*5)-2^((3^3*5+1)/2)+1)*(2^(3^2)-2^((3^2+1)/2)+1)/((2^(3^2*5)+2^((3^2*5+1)/2)+1)*(2^(3^3)+2^((3^3+1)/2)+1))
(2^(3^3*5)+1)*(2^(3^2)+1)/((2^(3^2*5)+1)*(2^(3^3)+1))
(2^(3^3*5)-1)*(2^(3^2)-1)/((2^(3^2*5)-1)*(2^(3^3)-1))
(2^(3^4*5)+2^((3^4*5+1)/2)+1)*(2^(3^3)+2^((3^3+1)/2)+1)/((2^(3^3*5)-2^((3^3*5+1)/2)+1)*(2^(3^4)-2^((3^4+1)/2)+1))
(2^(3^4*5)-2^((3^4*5+1)/2)+1)*(2^(3^3)-2^((3^3+1)/2)+1)/((2^(3^3*5)+2^((3^3*5+1)/2)+1)*(2^(3^4)+2^((3^4+1)/2)+1))
(2^(3^4*5)+1)*(2^(3^3)+1)/((2^(3^3*5)+1)*(2^(3^4)+1))
(2^(3^4*5)-1)*(2^(3^3)-1)/((2^(3^3*5)-1)*(2^(3^4)-1))
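For context (my own note, not from the thread): the paired "+"/"-" expressions above are the two halves of the Aurifeuillian factorization 2^(2n) + 1 = (2^n + 2^((n+1)/2) + 1) * (2^n - 2^((n+1)/2) + 1), valid for odd n, which is easy to verify:

```python
def aurifeuillian_pair(n):
    """The two Aurifeuillian factors of 2^(2n) + 1 for odd n."""
    assert n % 2 == 1
    h = 2 ** ((n + 1) // 2)
    return 2 ** n + h + 1, 2 ** n - h + 1

# With a = 2^n and h = 2^((n+1)/2), the product is
# (a + 1)^2 - h^2 = 2^(2n) + 2^(n+1) + 1 - 2^(n+1) = 2^(2n) + 1.
for n in (1, 3, 5, 15, 45):
    a, b = aurifeuillian_pair(n)
    assert a * b == 2 ** (2 * n) + 1
print("identity holds for all tested n")
```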
#43 |
"Mihai Preda"
Apr 2015
3·457 Posts |
Let me say a bit more about the algorithm for selecting the iterations K that produced the above coverage numbers.
For any prime p, let's call z(p) the multiplicative order of 2 modulo p. In pari-gp this is znorder(Mod(2,p)), as mentioned previously by @axn (thanks!). So k = z(p) is the smallest value such that 2^k - 1 has p as a factor, and the prime p can be tested (or "covered") in any iteration k that is a multiple of z(p). (This way, a given iteration k may cover multiple primes at once.)

We pay a fixed cost for each test (which can be approximated by the cost of one multiplication). A test that succeeds early (at low K) saves more than a test that succeeds late (at high K), because as soon as a test succeeds we stop the PRP and save the portion of the PRP not yet done. OTOH a low K tends to cover fewer primes than a high K, simply because a lower K has fewer divisors, thus less capacity to cover multiple primes at once.

Now let's model the value of [a test at] iteration K, starting with the value of a prime p being covered. All the primes below B1 are always included, so covering a prime p <= B1 has no [additional] value (or very low value). For the primes above B1, we want to focus on the low values: a compact cover just above B1, extending as far up as possible. Beyond that, any additional primes covered are still good but with decreasing benefit. In my selection algorithm, I model this with a function giving the value of [covering] a prime, e.g.:

value(p) := 0 if p <= B1, 1 if p in (B1, 40M], 20M/(p - 20M) otherwise.

For example with B1=2M, value(p in (2M, 40M]) is 1, value(60M) is 1/2, value(100M) is 1/4, value(1G) is 1/49.

Now the value of a test at iteration K:

rawValue(k) = sum of value(p) over primes p with z(p) | k.

Which means: an iteration K covers all the primes whose order z(p) divides K, and the value of K is the sum of the values of the covered primes.

Next we combine the raw value of an iteration K with the amount of PRP saved ("early K are better"). So we define, for exponent E and k in [1, E):

value(k, E) = rawValue(k) * (E - k) / E.

Or we can model the fact that a factor, even if found at the very end of the PRP, still has some value (e.g. by removing the need for a double-check) like this:

fancyValue(k, E) = value(k, E * 1.05).

Of note: any prime with z(p) > E cannot be covered during the PRP, thus need not be considered. OTOH there are some very large primes, much larger than E, with z(p) < E, which thus can be considered (although the value of such large primes is low).

One more thing: value(k, E) above was defined over "all primes". Naming the set of primes it works over explicitly, it becomes value(activePrimes, k, E).

Now the K-selection algorithm works like this:
1. Given E, set activePrimes := all primes with z(p) < E.
2. Iteratively:
   - select [one] K which maximizes fancyValue(activePrimes, k, E) ("best K for activePrimes");
   - remove all the primes covered by this K from activePrimes;
   - repeat until the desired number of Ks is obtained.
3. Sort the selected Ks in ascending order. Test each K as soon as the PRP reaches it.

Last fiddled with by preda on 2018-09-10 at 01:19
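The steps above can be sketched as a toy-sized greedy loop (my own reconstruction from the description, not preda's code; E, B1, and the value() breakpoints are shrunk so it runs instantly, whereas the post models E ~ 88M with B1 = 2M):

```python
E, B1 = 2000, 50
PRIME_LIMIT = 5000  # consider primes up to here; real runs go far higher

def znorder2(p):
    """Multiplicative order z(p) of 2 mod the odd prime p
    (pari/gp: znorder(Mod(2, p))), by brute force."""
    k, r = 1, 2 % p
    while r != 1:
        r, k = (r * 2) % p, k + 1
    return k

def small_primes(n):
    """Odd primes up to n (2 is skipped: z(p) only makes sense for odd p)."""
    sieve = bytearray([1]) * (n + 1)
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(3, n + 1, 2) if sieve[i]]

def value(p):
    """Value of covering prime p, same shape as the post's value(p):
    0 up to B1, flat just above B1, then decaying."""
    if p <= B1:
        return 0.0
    if p <= 20 * B1:                     # post: 1 for p in (B1, 40M], B1 = 2M
        return 1.0
    return (10 * B1) / (p - 10 * B1)     # post: 20M / (p - 20M)

# activePrimes = primes with z(p) < E, grouped here by their order z.
by_order = {}
for p in small_primes(PRIME_LIMIT):
    z = znorder2(p)
    if z < E:
        by_order.setdefault(z, []).append(p)

def pick_ks(num):
    # rawValue(k) = sum of value(p) over primes with z(p) | k,
    # accumulated by sieving over the multiples of each order.
    raw = [0.0] * E
    weight = {z: sum(value(p) for p in ps) for z, ps in by_order.items()}
    for z, v in weight.items():
        for m in range(z, E, z):
            raw[m] += v
    chosen = []
    for _ in range(num):
        # fancyValue(k) = rawValue(k) * (1.05*E - k) / (1.05*E); the constant
        # denominator doesn't change the argmax, so it is dropped here.
        k = max(range(1, E), key=lambda j: raw[j] * (1.05 * E - j))
        chosen.append(k)
        # Remove the primes covered by k (those with z | k) from the pool.
        for z in [z for z in weight if k % z == 0]:
            for m in range(z, E, z):
                raw[m] -= weight[z]
            del weight[z]
    return sorted(chosen)

ks = pick_ks(5)
print(ks)
```

The sieve-over-multiples trick keeps rawValue cheap; the greedy removal step is exactly the "remove all the primes covered by this K from activePrimes" of step 2 above.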
#44 |
"Mihai Preda"
Apr 2015
3×457 Posts |
And for a taste of the Ks:
Code:
Best Ks:
24504480 107.4
23734620 84.4
24116400 80.1
15897420 67.4
26070660 63.5
16581600 58.1
25882560 55.8
19445580 55.0
18181800 53.0
18354000 52.6

Worst:
23578995 0.5
48513306 0.5
23579024 0.5
23919236 0.5
45576708 0.5
23579039 0.5
40528986 0.5
8821628 0.5
12839602 0.5
13121614 0.5

Smallest:
650179 2.0
681301 2.0
711941 1.0
783911 2.0
786881 1.0
788971 1.1
789877 1.0
792427 2.0
826367 1.0
831713 1.2

Largest:
68002500 0.9
67631844 0.9
67450152 1.1
67107540 1.0
66736692 0.6
66671052 1.0
66278292 0.7
66246012 1.0
66172740 1.1
66099300 0.7

Last fiddled with by preda on 2018-09-10 at 01:39