mersenneforum.org > Factoring Projects > Factoring
2010-11-18, 02:36   #1
firejuggler

Question about GMP-ECM

Hi.
I'm trying to learn the ropes of the GMP-ECM software.
I know (or rather, understand) that for a proper chance of finding a factor, phase 1 and phase 2 should be of equal length.

1) ecm -c 5 2e2 1e4 (phase 1 and phase 2 each around 1.5 s)
2) ecm -c 5 1e0 5e4 (phase 1 almost instant, phase 2 about 3 s)

for an 8500-digit composite.

Which of the two has the better chance of finding a factor?
2010-11-18, 03:14   #2
Mini-Geek

Add the -v (verbose) argument to print much more information. The output shows how many curves you can expect to run before finding a factor of x digits, and later how much time that should take, so you can see which of several combinations has the better chance of finding a factor, either per curve or in a given amount of time.
There are standard levels for searching for factors of a given size, such as 430 curves with B1=25e4 (250000) for 30-digit factors, that were once calculated to be the most efficient. What is best for your machine is best determined by some experimentation, to see what sort of B1 values give the lowest expected time for various factor sizes.
Note that GMP-ECM can automatically make a good choice of B2 if you leave B2 blank.
Also note that an 8500-digit composite, unless it is specially constructed to have only small factors, is very unlikely to be fully factored by ECM in any reasonable amount of time (even years). Factors under 30 digits are fairly easy to find, and from there it gets harder and harder: the top 10 ECM finds of a given year are in the 60-70+ digit range.
For some more information, including much of the math behind it, see http://mersennewiki.org/index.php/Elliptic_curve_method
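For example, a typical invocation at the 30-digit level might look like the sketch below (the exact flags are as in recent GMP-ECM versions; "composite.txt" is a placeholder filename, not from the thread):

```shell
# Run 430 curves at the t30 level (B1 = 25e4), leaving B2 blank so
# GMP-ECM chooses its own default; -v prints, among other things, a
# table of the expected number of curves needed for factors of
# various sizes at these bounds.
# "composite.txt" is a placeholder file holding the number to factor.
ecm -v -c 430 25e4 < composite.txt
```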
2010-11-18, 03:24   #3
firejuggler

My goal here is not to fully factor those composites; it is to find the most factors in the least time (and I am aware of the -t switch).
As a regular contributor to aliquot sequences, I know how many curves I should run. I'm just asking what the best method is for large numbers: step 1 and step 2 of similar length, or a non-existent step 1 and a twice-as-long B2.


edit: never mind; since step 2 requires a point found in step 1, it is best to use step 1 and step 2 of equal length.

Last fiddled with by firejuggler on 2010-11-18 at 03:38
2010-11-18, 15:53   #4
R.D. Silverman
Quote:
Originally Posted by firejuggler View Post
My goal here is not to fully factor those composites; it is to find the most factors in the least time.
So find 0 factors in 0 time and you will have achieved your goal.

"Most factors in the least time possible" is nonsensical. The least time possible is 0.

OTOH, you can select parameters to maximize the probability of finding a factor in a given fixed amount of time, or you can select parameters to maximize the probability of finding a factor per unit time. In the latter case the amount of time spent is not bounded.

Read my joint paper with Sam Wagstaff Jr:

A Practical Analysis of ECM

It was published in Mathematics of Computation.
2010-11-18, 16:37   #5
axn

A question for Bob.
In your paper, the conclusion that the time spent in stage 1 and stage 2 should be roughly the same for optimal performance is based on the assumption that stage 2 time is proportional to B2. Is my understanding correct?
If so, how does the fact that the current GMP-ECM stage 2 is _not_ proportional to B2 change that conclusion?
2010-11-18, 17:16   #6
bdodson

Quote:
Originally Posted by axn View Post
A question for Bob.
In your paper, the conclusion that the time spent in stage 1 and stage 2 should be roughly the same for optimal performance is based on the assumption that stage 2 time is proportional to B2. Is my understanding correct?
If so, how does the fact that the current GMP-ECM stage 2 is _not_ proportional to B2 change that conclusion?
The optimal split between step 1 and step 2 depends on how well step 2 performs. The program analyzed in Silverman-Wagstaff (i.e., Peter Montgomery's 1987 program) was far better in step 2 than previous code, as they describe.

I first started with one of Peter's binaries of the 1987 version, which he circulated after he had his new ecm/fft, which had a way-better-yet step 2. Compare the curve counts in Bob's paper with the ones in Peter's thesis; they are totally different, not really comparable optimizations. The "rule" about equal times is NOT a rule. In particular, gmp-ecm has step 2 performance that is closer to ecm/fft (except more portable; Peter's was based on MIPS/IRIX assembly code) than to Peter's earlier 1987 version. -Bruce
2010-11-18, 17:17   #7
R.D. Silverman

Quote:
Originally Posted by axn View Post
A question for Bob.
In your paper, the conclusion that the time spent in stage 1 and stage 2 should be roughly the same for optimal performance is based on the assumption that stage 2 time is proportional to B2. Is my understanding correct?
It is optimal to spend the same amount of time regardless of whether stage 2 time is proportional to B2.

Suppose stage 2 is K times as fast as stage 1. That is to say, if we take step 1 to B1 in time t0, then we can take step 2 to K*B1 also in time t0. The optimal B2 limit is then K*B1. This will maximize the probability of success per unit time.

If we use an FFT-based step 2, then we can take step 2 to limit B1^2 (up to a constant multiple that is implementation dependent) in the same time that it took to take step 1 to B1.

At the time I wrote the paper, Peter Montgomery's Sparc code for ECM was the fastest known to me. It was not FFT-based for step 2, but it did achieve K = 100 (which is an often-quoted value).
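The arithmetic of this equal-time rule can be sketched in a few lines of shell. The numbers are illustrative placeholders, using K = 100 as quoted above and the standard t30 bound B1 = 25e4:

```shell
# Sketch of the equal-time rule: if stage 2 is K times as fast as
# stage 1, then in the same time that stage 1 reaches B1, stage 2
# can reach K*B1, so the optimal stage-2 bound is B2 = K*B1.
B1=250000                 # stage-1 bound (standard t30 level)
K=100                     # assumed stage-2 speed factor (quoted value)
B2=$((K * B1))            # optimal stage-2 bound under the rule
echo "B1=$B1 -> optimal B2=$B2"   # prints: B1=250000 -> optimal B2=25000000
```

With an FFT-based stage 2, K itself grows with B1 (since the reachable B2 scales like B1^2), which is what axn's follow-up question below is probing.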
2010-11-18, 17:26   #8
R.D. Silverman

Quote:
Originally Posted by R.D. Silverman View Post
It is optimal to spend the same amount of time regardless of whether stage 2 time is proportional to B2. [...]

Let me add that certain assumptions made then are not applicable now. The main (unstated) assumption was that the same machine is used for step 1 and step 2.

Now (e.g. in the work at EPFL), step 1 is/can be done on machines with special multiprecision hardware, while step 2 is/can be done on other machines with large memory.
2010-11-18, 23:34   #9
R.D. Silverman

Quote:
Originally Posted by axn View Post
Ok. I am having trouble with this part. What does it mean that stage 2 is K times as fast as stage 1?

Does it mean that if stage 1 can process x primes in unit time, then stage 2 can process Kx primes in unit time?
Huh???? Didn't you read what I wrote????
2010-11-18, 23:36   #10
axn

Quote:
Originally Posted by R.D. Silverman View Post
Huh???? Didn't you read what I wrote????
Sorry. I deleted that post. Need time to think.
2010-11-19, 02:11   #11
axn

Quote:
Originally Posted by R.D. Silverman View Post
Suppose stage 2 is K times as fast as stage 1.
Ok. Second attempt. Does the analysis require that K be a constant for a given implementation and number, across different values of B1?

Quote:
Originally Posted by R.D. Silverman View Post
If we use an FFT-based step 2, then we can take step 2 to limit B1^2 (up to a constant multiple that is implementation dependent) in the same time that it took to take step 1 to B1.
So K is a function of B1 in this case, right? And it doesn't affect the conclusion of the analysis?

Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2020, Jelsoft Enterprises Ltd.

This forum has received and complied with 0 (zero) government requests for information.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation.
A copy of the license is included in the FAQ.