mersenneforum.org > Factoring Projects > Factoring
2016-12-16, 05:34   #23
VBCurtis

My experience with big-bound ECM runs is that halving the memory adds about 30% to the stage 2 runtime for a given B2 bound. The -maxmem option cuts the memory footprint by factors of 2 while increasing "k" (the number of chunks stage 2 is divided into) by factors of 4.
EDIT: this behavior is how ECM treats regular numbers; for Mersenne candidates it uses finer steps, in ways I do not recall. -maxmem might select k = 7 or 9 to fit under the memory limit, which isn't possible for non-Mersenne candidates.
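A rough back-of-the-envelope model of that trade-off (the ~30% penalty and the halving/quadrupling behavior are the empirical observations above, not gmp-ecm internals; all constants here are illustrative assumptions):

```python
# Rough model of gmp-ecm stage 2 under -maxmem, based on the observed
# behavior described above: each halving of available memory quadruples
# k (the number of chunks stage 2 is split into) and costs ~30% extra
# runtime. These constants are assumptions, not gmp-ecm's actual code.

def stage2_estimate(full_mem_gb, maxmem_gb, base_hours, base_k=1):
    """Estimate stage 2 chunk count and runtime when memory is capped."""
    halvings = 0
    mem = float(full_mem_gb)
    while mem > maxmem_gb:
        mem /= 2
        halvings += 1
    k = base_k * 4 ** halvings             # chunks grow by factors of 4
    hours = base_hours * 1.3 ** halvings   # ~30% penalty per halving
    return k, hours

# Capping a 32GB-sized job at 8GB means two halvings:
k, hours = stage2_estimate(full_mem_gb=32, maxmem_gb=8, base_hours=10.0)
print(k, round(hours, 2))   # k = 16, roughly 16.9 hours
```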

I'd like to hear about timings for stage 2 using k-values over 30; if you run any, please report your results!

Last fiddled with by VBCurtis on 2016-12-16 at 05:37
2016-12-16, 10:20   #24
Gordon

Quote:
Originally Posted by GP2
Yes, but in doing so you lose some of the benefit of using gmp-ecm

Let's say you want to do an exponent to B2 = 100,000 but you only have enough memory to do B2 = 1000 (I'm using ridiculously low values in order to simplify the example).
Agreed, but as was pointed out to me, at least it gets run; slower is better than not at all.
2016-12-16, 13:29   #25
fivemack

Quote:
Originally Posted by Gordon
Agreed, but as was pointed out to me, at least it gets run; slower is better than not at all.
I think it is better not to run a job on a 16GB machine in 2016 and to run it instead on a cheap 256GB machine in 2022, than to burn sixteen times the coal getting the job done on the too-small machine today. There is no urgency in determining the factors of 2^10061-1.
2016-12-16, 14:06   #26
lycorn

That makes some sense, yes, but if we were to adhere too rigidly to that principle, no project would ever start. Like the famous diet that always starts tomorrow...
2016-12-16, 17:16   #27
Gordon

Quote:
Originally Posted by fivemack
I think it is better not to run a job on a 16GB machine in 2016 and to run it instead on a cheap 256GB machine in 2022, than to burn sixteen times the coal getting the job done on the too-small machine today. There is no urgency in determining the factors of 2^10061-1.
By that reckoning I should never have run my first LL test on that P-90 machine back in 1997, as what took 3 days then takes 5 minutes now...


...so let's stop all testing until the year 2100, when we can do the next 84 years of work in 12 months.

You might not want to see a first-time factor of a sub-10k exponent, but a fair few of us do.
2016-12-16, 23:53   #28
Gordon

Quote:
Originally Posted by fivemack
I think it is better not to run a job on a 16GB machine in 2016 and to run it instead on a cheap 256GB machine in 2022, than to burn sixteen times the coal getting the job done on the too-small machine today. There is no urgency in determining the factors of 2^10061-1.
I have reread your post; this isn't a 16GB machine, it's 32GB. There's an obscure bug in gmp-ecm which (most times) makes it fail when trying to allocate over 16GB of RAM.

See this thread
2016-12-17, 00:04   #29
VBCurtis

Quote:
Originally Posted by Gordon
By that reckoning I should never have run my first LL test on that P-90 machine back in 1997, as what took 3 days then takes 5 minutes now...


...so let's stop all testing until the year 2100, when we can do the next 84 years of work in 12 months.

You might not want to see a first-time factor of a sub-10k exponent, but a fair few of us do.
No, you're missing the point. Most of the tasks we do on this forum scale with CPU speed, but running ECM on machines with too little memory requires time that scales with both CPU speed AND memory size. So waiting for future machines for these ECM tasks yields speedups much greater than the speedups for LL testing, small-bound ECM, or any number of the other things we collectively like to do.

A future machine that has double the CPU speed and double the memory will do LL testing twice as fast, but big-bound ECM ~3 times as fast. So, in project-efficiency terms, do the work now that will benefit less from future speedups, and delay work that will benefit more.
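The arithmetic behind that comparison, as a sketch (the 1.3 factor comes from the ~30%-per-halving observation earlier in the thread; the exact figures are illustrative assumptions, not measurements):

```python
# Sketch of the speedup argument: LL testing scales with CPU speed only,
# while memory-starved big-bound ECM gains from BOTH the faster CPU and
# the removal of one ~30%-per-halving stage 2 penalty when memory doubles.
# All constants are illustrative assumptions taken from the thread.

cpu_speedup = 2.0               # future machine: double the CPU speed
mem_penalty_per_halving = 1.3   # ~30% stage 2 cost per memory halving

ll_speedup = cpu_speedup                               # LL: CPU-bound only
ecm_speedup = cpu_speedup * mem_penalty_per_halving    # ECM: CPU + memory

print(ll_speedup, ecm_speedup)   # 2.0 vs 2.6, i.e. closer to "~3 times"
```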
2016-12-17, 00:29   #30
retina

Quote:
Originally Posted by VBCurtis
A future machine that has double the CPU speed and double the memory will do LL testing twice as fast, but big-bound ECM ~3 times as fast. So, in project-efficiency terms, do the work now that will benefit less from future speedups, and delay work that will benefit more.
And in six years' time, with the shiny new 256GB computer (using your figures), we can make the exact same argument: don't run ECM yet; wait for a time when it will be more efficient. Then there is never a time at which we can do the test, because there will always be some future time when it would be more efficient.
2016-12-17, 01:36   #31
VBCurtis

Once ECM has all the memory it wants, the only future efficiency gain comes from CPU speed. It's the combination of gains from more memory and more CPU that is worth waiting for, and that only applies to ECM bounds that want more memory than the machine has.
2016-12-17, 02:00   #32
GP2

Quote:
Originally Posted by Gordon
there's an obscure bug in gmp-ecm which (most times) makes it fail when trying to allocate over 16GB of RAM.
I very much doubt it, because I routinely use more than that. In fact, I have one job using 125 GB of memory right now. That is for P−1 rather than ECM, but it shouldn't make a difference.

I am using version 7.0.4 on Linux, compiled from source code.

P.S. In the cloud you can use machines with up to 2 TB of memory.
2016-12-17, 03:04   #33
VBCurtis

Likewise, I have allocated 28-30GB of RAM on 32GB systems for P-1, P+1, and regular ECM curves. No crashes.