2020-09-11, 13:49  #12 
Apr 2010
Over the rainbow
2·5·11·23 Posts 
From M1 to M500, 83 Mersenne numbers with prime exponent are fully factored, not counting the Mersenne primes themselves. 30 have 2 factors and 17 have 3 factors, so the remaining 36 have 4 or more factors.
Last fiddled with by firejuggler on 2020-09-11 at 14:06 
2020-09-23, 02:04  #14 
Random Account
"Norman D. Powell"
Aug 2009
Indiana, USA.
2·3·313 Posts 
M1277 is 385 decimal digits long. If it were "smooth," as some here like to say, this would have been done and over with a long time ago. Four LL tests say it is composite. A P−1 test back in 2017 used a B1 bound of 5 trillion + 3; the B2 was 400 trillion + 241.
One individual I am aware of spent months running stage 1 ECM curves with Prime95 and stage 2 with GMP-ECM. Unless somebody could coax Google into trying this on their quantum computer, M1277 will likely remain an enigma for quite some time to come. It must have one or more really large factors which we do not have the tech to reach currently. 
2020-09-23, 03:40  #15  
"Curtis"
Feb 2005
Riverside, CA
3×1,559 Posts 
Quote:
There is still plenty of ECM to do on this number before it's "ready" for SNFS. Anyone can fire up curves at, say, B1 = 6e9 or bigger and have a go, and few of us would be surprised if someone found a factor in just that way. If another quarter million or so such curves (I didn't actually calculate how many) fail to find a factor, then we head off to SNFS when someone feels like starting it. We have the tech to do SNFS on it right now, but not the patience. It would take a cluster to solve the matrix, but those exist too. The sieving is a really long task, which is why nobody has bothered to try (and also why more ECM is worth the effort), but CADO-NFS can do it. So, no, it's not true that we don't have the tech. 

2020-09-23, 13:55  #16  
Random Account
"Norman D. Powell"
Aug 2009
Indiana, USA.
2·3·313 Posts 
Quote:
6,000,000,000 for B1. I believe the rule of thumb is B2 = B1 × 100. Of course, that is not set in stone; a person could go higher if they choose. I have an older machine that sits in a corner I could do this with. It is not fast or elegant, but it gets the job done. It still has the 29.x version of Prime95 on it. I could let this run for months, even years. I will give this a go. The only requirement is to not let it run out of work. 
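For what it's worth, a stage 1 ECM assignment for Prime95 goes in worktodo.txt as an ECM2 line. The field order shown here (k, b, n, c, B1, B2, curve count) is from memory, so treat it as a sketch rather than a template:

```text
ECM2=1,2,1277,-1,6000000000,600000000000,10
```

That would queue 10 curves on 2^1277−1 at B1 = 6e9 with B2 = 100 × B1; queueing several such lines is what keeps the machine from running out of work.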

2020-09-23, 15:02  #17 
"Curtis"
Feb 2005
Riverside, CA
11105_{8} Posts 
Actually, I assumed GMP-ECM for the runs, as it is dramatically faster than Prime95 for this very small (by GIMPS standards) number. You'll find that B2 is far more than 100 × B1 when using GMP-ECM. Memory use is also higher, though.
There's another M1277 thread where the procedure for using Prime95 for stage 1 and GMP-ECM for stage 2 is laid out; that's the fastest way for this number. If you're doing more than a handful of curves, I strongly suggest you use GMP-ECM (Windows or Linux) for stage 2. I think I ran 500 curves in just this way a couple of years back; I left an old Core 2 Quad on it for a few months. 
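The hand-off that thread describes is, in outline, something like the following. The setting and file names here are from memory and may differ between versions, so check that thread for the authoritative recipe:

```shell
# 1. In prime.txt, set GmpEcmHook=1 so Prime95 writes each finished
#    stage-1 residue to results.txt in GMP-ECM resume format.
# 2. Run the ECM stage-1 assignments in Prime95 as usual.
# 3. Hand the residues to GMP-ECM, which runs stage 2 with its own
#    (much larger) default B2:
ecm -resume results.txt 6000000000
```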
2020-09-23, 15:27  #18  
Random Account
"Norman D. Powell"
Aug 2009
Indiana, USA.
2×3×313 Posts 
Quote:
This will give me something to do and to think about. I hit the big 65 in 13 days. Thank you for the feedback. 

2020-09-23, 15:34  #19 
"Oliver"
Sep 2017
Porta Westfalica, DE
439 Posts 
If memory is not sufficient, you could try ecm -maxmem 3072. That will limit it to 3 GB of RAM usage.
For me, GMP-ECM reports "Estimated memory usage: 8.08GB" with B2 = 51,985,969,455,438 (the ECM default at B1 = 2e9). With -maxmem 3072 it says "Estimated memory usage: 1.93GB", so this should be totally fine for that system. 
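Assuming a GMP-ECM binary on the path (it reads the number to factor on stdin), the invocation would look something like this; the -maxmem flag is real, and the bounds mirror the ones discussed above:

```shell
# Cap stage-2 memory at roughly 3 GB while running a curve at B1 = 2e9
echo "2^1277-1" | ecm -maxmem 3072 2000000000
```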
2020-09-23, 15:42  #20 
"Oliver"
Sep 2017
Porta Westfalica, DE
439 Posts 
Sorry, I went for the wrong B1.
For me, GMP-ECM reports "Estimated memory usage: 16.50GB" with B2 = 262,752,699,834,252 (the ECM default at B1 = 6e9). With -maxmem 3072 it says "Estimated memory usage: 1.93GB", so this should be totally fine for that system. 
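As a sanity check on the earlier rule of thumb: plugging in the two default B2 values quoted in this thread shows GMP-ECM's defaults sit hundreds of times above B2 = 100 × B1 (numbers taken straight from the posts above):

```python
# GMP-ECM default B2 values quoted in this thread, keyed by B1.
defaults = {
    2_000_000_000: 51_985_969_455_438,   # B1 = 2e9
    6_000_000_000: 262_752_699_834_252,  # B1 = 6e9
}
for b1, b2 in defaults.items():
    print(f"B1={b1:.0e}: default B2 = {b2 / b1:,.0f} * B1")
# -> B1=2e+09: default B2 = 25,993 * B1
# -> B1=6e+09: default B2 = 43,792 * B1
```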
2020-09-23, 17:49  #21  
Random Account
"Norman D. Powell"
Aug 2009
Indiana, USA.
2×3×313 Posts 
Quote:
14 hours for each stage 1 curve on that machine with B1 = 6e9. I am loading it in groups of 10, a single curve on each work line: 5.8 days for the group. Then I will go to GMP-ECM. Once finished, repeat the process. 

2020-09-23, 20:35  #22 
"Oliver"
Sep 2017
Porta Westfalica, DE
667_{8} Posts 
Since you have a dual-core CPU, you should be able to increase efficiency by doing stage 1 of a set \(A\) and stage 2 of a set \(B\) in parallel, if you like!
Prime95's parallelization is not efficient at all at those tiny FFT sizes. The OpenMP functionality of GMP-ECM currently only works in stage 2 (correct me if I'm wrong), and even there it only helps in certain substeps. 
Similar Threads  
Thread  Thread Starter  Forum  Replies  Last Post 
inconsistent timestamp intervals in prime.log  ixfd64  Software  1  2020-11-01 20:27 
Could I run this Python script on a supercomputer?  Ghost  Information & Answers  4  2018-11-30 04:07 
M1277 - no factors below 2^65?  DanielBamberger  Data  17  2018-01-28 04:21 
search for MMM127 small factors?  Orgasmic Troll  Miscellaneous Math  7  2006-06-11 15:38 
Random numbers and proper factors  mfgoode  Math  20  2006-02-05 02:09 