2016-12-16, 05:34  #23 
"Curtis"
Feb 2005
Riverside, CA
11577_{8} Posts 
My experience with big-bound ECM runs is that halving the memory adds about 30% to the stage 2 runtime for a given B2 bound. The maxmem option will cut the memory footprint by factors of 2 while increasing "k" (the number of chunks stage 2 is divided into) by factors of four.
EDIT: this behavior is how ECM treats regular numbers; for Mersenne candidates it uses finer steps, in ways I do not recall. maxmem might select k = 7 or 9 to fit under the memory boundary, which isn't possible for non-Mersenne candidates. I'd like to hear about timings for stage 2 using k-values over 30; if you run any, please report your results! Last fiddled with by VBCurtis on 2016-12-16 at 05:37 
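The rule of thumb above (each halving of stage-2 memory multiplies k by 4 and adds roughly 30% to stage-2 runtime) can be sketched as a quick model. This is purely illustrative, not derived from the GMP-ECM source; the function name and numbers are my own:

```python
# Illustrative sketch of the quoted rule of thumb, NOT GMP-ECM internals:
# each halving of stage-2 memory multiplies the chunk count k by 4
# and adds roughly 30% to stage-2 runtime.

def stage2_estimate(base_time_hours, base_mem_gb, mem_gb, base_k=1):
    """Estimate stage-2 runtime and chunk count under a reduced memory cap."""
    halvings = 0
    m = base_mem_gb
    while m / 2 >= mem_gb:       # count how many times memory is halved
        m /= 2
        halvings += 1
    time = base_time_hours * (1.3 ** halvings)   # ~30% slower per halving
    k = base_k * (4 ** halvings)                 # k grows 4x per halving
    return time, k

# e.g. a 10-hour stage 2 that wants 64 GB, capped at 8 GB (3 halvings):
t, k = stage2_estimate(10.0, 64, 8)   # t ~ 22 hours, k = 64
```

Under these assumed constants, squeezing a 64 GB stage 2 into 8 GB roughly doubles the runtime, which matches the "slower but it fits" trade-off discussed below.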
2016-12-16, 10:20  #24 
Nov 2008
765_{8} Posts 
Agreed, but as was pointed out to me, at least it gets run; slower is better than not at all.

2016-12-16, 13:29  #25 
(loop (#_fork))
Feb 2006
Cambridge, England
41×157 Posts 
I think it is better to skip a job on a 16GB machine in 2016 and run it on a cheap 256GB machine in 2022 than to burn sixteen times the coal getting the job done on the too-small machine today. There is no urgency to determining the factors of 2^10061-1.

2016-12-16, 14:06  #26 
"GIMFS"
Sep 2002
Oeiras, Portugal
1493_{10} Posts 
That makes some sense, yes, but if we were to adhere too rigidly to that principle, no project would ever start. Like the famous diet that always starts tomorrow...

2016-12-16, 17:16  #27  
Nov 2008
3·167 Posts 
Quote:
...so let's stop all testing until the year 2100, when we can do the next 84 years of work in 12 months. You might not want to see a first-time factor of a sub-10k exponent, but a fair few of us do. 

2016-12-16, 23:53  #28  
Nov 2008
3·167 Posts 
Quote:
See this thread 

2016-12-17, 00:04  #29  
"Curtis"
Feb 2005
Riverside, CA
7·23·31 Posts 
Quote:
A future machine that has double the CPU speed and double the memory will do LL testing twice as fast, but big-bound ECM ~3 times as fast. So, in project-efficiency terms, do the work now that will benefit less from future speedups, and delay the work that will benefit more. 
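A back-of-envelope version of this argument, using only the speedup figures quoted in the post (the "fraction of work saved" framing is my own way of phrasing it):

```python
# Sketch of the deferral argument above, using the post's assumed numbers:
# a future machine with 2x CPU and 2x memory runs LL ~2x faster,
# but memory-starved big-bound ECM ~3x faster.
ll_speedup = 2.0    # LL is compute-bound: gains only from the faster CPU
ecm_speedup = 3.0   # big-bound ECM also gains from the extra memory

# The work "saved" by deferring a job is 1 - 1/speedup, so the job with
# the larger future speedup is the better candidate to postpone.
ll_saving = 1 - 1 / ll_speedup     # 0.50: LL saves half its cost by waiting
ecm_saving = 1 - 1 / ecm_speedup   # ~0.67: ECM saves two-thirds by waiting
```

Hence, by this logic, memory-starved ECM is the work to defer and LL the work to do now.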

2016-12-17, 00:29  #30  
Undefined
"The unspeakable one"
Jun 2006
My evil lair
3·7·13·23 Posts 
Quote:


2016-12-17, 01:36  #31 
"Curtis"
Feb 2005
Riverside, CA
7×23×31 Posts 
Once ECM has all the memory it wants, the only future efficiency gained is from CPU speed. It's the combination of gains from more memory and more CPU that are worth waiting for, and that only applies to ECM bounds that desire more memory than the machine has.

2016-12-17, 02:00  #32  
Sep 2003
5×11×47 Posts 
Quote:
I am using version 7.0.4 on Linux, compiled from source code. PS, in the cloud you can use machines with up to 2 TB of memory. 
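For readers unfamiliar with the memory cap being discussed: GMP-ECM exposes it as the -maxmem flag (value in MB). The candidate number and B1 bound below are arbitrary placeholders of my own, not values from this thread; check `ecm --help` for the exact syntax of your build:

```shell
# Run one ECM curve with B1 = 85e7, capping stage-2 memory at ~16 GB.
# -maxmem takes a value in MB; candidate and bound here are example values only.
echo "2^1277-1" | ecm -maxmem 16000 850000000
```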

2016-12-17, 03:04  #33 
"Curtis"
Feb 2005
Riverside, CA
7·23·31 Posts 
Likewise, I have allocated 28-30GB of RAM on 32GB systems for P-1, P+1, and regular ECM curves. No crashes.

Similar Threads  
Thread  Thread Starter  Forum  Replies  Last Post 
Modular restrictions on factors of Mersenne numbers  siegert81  Math  23  20140318 11:50 
Mersenne prime factors of very large numbers  devarajkandadai  Miscellaneous Math  15  20120529 13:18 
Factors of Mersenne Numbers  asdf  Math  17  20040724 14:00 
Factoring Smallest Fermat Numbers  Erasmus  Factoring  32  20040227 11:41 
Factors of Mersenne numbers ?  Fusion_power  Math  13  20031028 20:52 