mersenneforum.org > Other Stuff > Archived Projects > ElevenSmooth
Old 2003-12-06, 17:39   #1
wblipp ("William", May 2003, New Haven)

Optimal ECM Escalation?

Quote:
Originally posted by Bob Silverman
and parameter selection that I helped develop.
...
over 25 years experience working in this domain. I have implemented almost every factoring algorithm there is. I have written papers showing how to improve them.
Speaking of parameter selection, could you or one of the other pros here tell me whether the standard escalation pattern for ECM is really best for all situations? It seems to me there are three situations that might have different answers.

1. The next factor found is likely to finish the composite
For this problem, I think the standard approach of searching long enough to find a factor, on average, if it exists, works well. The transition comes near the point where the expected factors per CPU effort shifts in favor of the next range to test.

2. The next factor is unlikely to finish the composite
For example, composites of several hundred to several thousand digits being searched for 25-40 digit factors. We are almost certain to need to search the next range anyway, so it seems intuitive that we should go there sooner. The reason to hesitate is that every factor found makes the following tests complete faster, so some low value tests can speed the progress - but how many?

3. Future tests will take the same amount of time regardless of factors found
The Special Project works this way because Prime95 has optimizations for large exponents but only works on full Mersenne numbers. (I think that's right - that Prime95 works on the full Mersenne number and then uses the known factors in the GCD stage.) Here the reasons for doing smaller B1 values would be to avoid finding too many simultaneous factors that are themselves a new factoring challenge, and to cover more starting points.

Is the standard approach really best for all of these situations?
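The crossover rule in case 1 can be made concrete with a toy comparison (a sketch only; the probabilities and per-curve costs below are invented for illustration, not taken from any ECM table):

```python
# Toy illustration of the case-1 crossover: keep running curves at the
# current B1 level while its expected factors per unit of CPU effort
# beat the next level's. Every number below is invented for illustration.

def yield_per_effort(p_factor_per_curve, seconds_per_curve):
    """Expected factors found per CPU-second at a given B1 level."""
    return p_factor_per_curve / seconds_per_curve

current_level = yield_per_effort(1.0e-4, 10.0)    # smaller B1: cheap curves
next_level = yield_per_effort(8.0e-4, 120.0)      # larger B1: costlier curves

# Escalate once the larger bound pays better per CPU-second.
escalate = next_level > current_level             # False with these numbers
```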
Old 2003-12-08, 13:44   #2
R.D. Silverman (Nov 2003)

See my joint paper with Sam Wagstaff.

It explains everything that you need to know.
Old 2003-12-08, 19:07   #3
wblipp ("William", May 2003, New Haven)

Quote:
Originally posted by Bob Silverman
See my joint paper with Sam Wagstaff.
That would be "A Practical Analysis of the Elliptic Curve Factoring Algorithm" by Robert D. Silverman and Samuel S. Wagstaff, Jr., Mathematics of Computation, vol. 61, July 1993.

It's widely cited on the web, but I've not found the article itself in cyberspace. Fortunately a good university library is nearby.
Old 2003-12-08, 22:44   #4
nitro (Feb 2003)

Quote:
Originally posted by Bob Silverman
See my joint paper with Sam Wagstaff.

It explains everything that you need to know.
What a total arrogant p**shead you are! Someone acknowledges that you apparently know a lot about some stuff, asks for your explicit help, yet you simply refer them to a paper.

Is it *beneath* you to actually CONTRIBUTE anything here other than backbiting and snide comments?

If it is, then please do us all a favour and SOD OFF, taking your stinking attitude with you.

YOU WON'T BE MISSED.
Old 2003-12-09, 07:05   #5
TauCeti (Mar 2003, Braunschweig, Germany)

Calm down, nitro. A pointer to a paper is a big help sometimes and Bob obviously decided not to spend the extra time to summarize the contents. But that is his decision and everyone should be free to decide what to do with his time.

Attitudes are difficult to judge here in a text-only forum. Let's try to keep a civil tongue even if there is strong disagreement about attitudes and/or facts.

Even the smilies start looking depressed...
Old 2003-12-09, 09:10   #6
Xyzzy ("Mike", Aug 2002)

Any chance we can get a PDF of that document, Bob?
Old 2003-12-09, 09:46   #7
xilman
Bamboozled!
 
xilman's Avatar
 
"π’‰Ίπ’ŒŒπ’‡·π’†·π’€­"
May 2003
Down not across

72·11·19 Posts
Default

Quote:
Originally posted by nitro
What a total arrogant p**shead you are! [...]
Calm down, and don't make yourself look like a boor in public. God. I'm beginning to sound like my mother. It's getting to me, I tell you.

It's my guess that you haven't actually read that paper. I have. If you had, you would know that it is very densely packed with good mathematics. There is vastly more material there than Bob would want to retype into a forum like this, and, believe me, hardly anyone would want to read it here either. As another poster has said, you'd want to read it in PDF or perhaps DVI format.

In my opinion, Bob did exactly the right thing. He answered the question by giving a reference to a paper that's easily obtainable in all good university libraries and the paper does, indeed, tell wblipp everything he needs to know (and, I suspect, a great deal more than he needs to know for his purposes).


Paul
Old 2003-12-09, 14:59   #8
wblipp ("William", May 2003, New Haven)

Quote:
Originally posted by Xyzzy
Any chance we can get a PDF of that document, Bob?
It's unlikely that Bob can legally do that. The journal owns the copyright. I was able to get the PDF at the local university library (a five-minute walk for me), which subscribes to the journal's service. I had to settle for the coarse PDF because it fit on a floppy, but it's readable.

William
Old 2004-04-06, 16:12   #9
wblipp ("William", May 2003, New Haven)

Quote:
Originally Posted by Bob Silverman
See my joint paper with Sam Wagstaff.

It explains everything that you need to know.
Bob,
Can you help me with how the numbers in Table 3 are calculated? I was able to follow the method of using Simpson's rule to numerically evaluate the rho function and can match the values in the Knuth reference to all twelve printed decimal digits. But I haven't figured out how to recreate the probability numbers in Table 3. For example, take the line

D=29
B1=247988
B2=9.99e6
P=1.38e-3

I thought I should calculate

alpha = log(10^28) / log(B1) = 5.191
beta = log(B2) / log(B1) = 1.298

mu(alpha, beta) = 1/(alpha - beta) * Integral from (alpha - beta) to (alpha - 1) of rho(t) dt = 3.46e-4

I thought this mu value would be the P value in the table.

I see two things I haven't used yet. I didn't use the Xi function in this calculation, but I thought that was about the probability that there would be a 29 digit factor of the composite. I also see a note that the table includes the effect of selecting curves so that the group order has a divisor of 12; I haven't figured out how that changes the calculation.

Thanks,
William
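For readers wanting to check these numbers, the calculation wblipp describes can be sketched in Python. The rho tabulation below uses a simple trapezoidal integration of the delay equation rho'(t) = -rho(t-1)/t rather than the paper's Simpson's-rule scheme, and the function names are illustrative, not from the paper:

```python
import math

def dickman_rho_table(tmax, step=1e-3):
    """Tabulate Dickman's rho on [0, tmax] by integrating the delay
    equation rho'(t) = -rho(t - 1) / t with the trapezoidal rule.
    (The paper uses Simpson's rule; this coarser scheme is accurate
    enough to check the numbers in the post.)"""
    n = int(round(tmax / step))
    one = int(round(1.0 / step))   # number of steps in an interval of length 1
    rho = [1.0] * (n + 1)          # rho(t) = 1 on [0, 1]
    for i in range(one + 1, n + 1):
        t = i * step
        left = rho[i - 1 - one] / (t - step)   # rho(s - 1) / s at s = t - step
        right = rho[i - one] / t               # rho(s - 1) / s at s = t
        rho[i] = rho[i - 1] - 0.5 * step * (left + right)
    return rho, step

def mu(alpha, beta, rho, step):
    """mu(alpha, beta) = 1/(alpha - beta) * integral of rho(t)
    over [alpha - beta, alpha - 1], by the trapezoidal rule."""
    i0 = int(round((alpha - beta) / step))
    i1 = int(round((alpha - 1.0) / step))
    area = 0.5 * (rho[i0] + rho[i1]) + sum(rho[i] for i in range(i0 + 1, i1))
    return step * area / (alpha - beta)

# The D=29 row quoted in the post: B1 = 247988, B2 = 9.99e6.
B1, B2 = 247988.0, 9.99e6
alpha = math.log(10 ** 28) / math.log(B1)   # ~5.191
beta = math.log(B2) / math.log(B1)          # ~1.298
rho, step = dickman_rho_table(4.5)
m = mu(alpha, beta, rho, step)              # ~3.46e-4, matching the post
```

This reproduces wblipp's mu = 3.46e-4 (not the table's 1.38e-3), which is precisely the discrepancy the post is asking about.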
Old 2004-05-05, 14:15   #10
R.D. Silverman (Nov 2003)

Quote:
Originally Posted by wblipp
Can you help me with how the numbers in Table 3 are calculated? [...]
The computation was indeed done by assuming that the group order was divisible by 12. Adjust D accordingly. Also, note that a 29-digit factor can be anywhere from 10^28 + 1 to 10^29 - 1. As I recall, I may have used the geometric mean approximation ~ sqrt(10) * 10^28 as a 'typical' 29-digit number. Now divide this by 12 before computing Dickman's function.
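A minimal sketch of the two adjustments Silverman describes (the geometric-mean "typical" 29-digit number and the division by 12), showing how they lower alpha; whether this alone reproduces the table entry is what the rest of the thread probes:

```python
import math

B1 = 247988.0

# A "typical" 29-digit number: geometric mean of the range [10^28, 10^29].
typical = math.sqrt(10) * 10 ** 28

# Curves are chosen so the group order is divisible by 12, so the part
# that must be smooth is (roughly) the group order divided by 12.
adjusted = typical / 12

alpha_plain = math.log(10 ** 28) / math.log(B1)   # ~5.191, as in the earlier post
alpha_adj = math.log(adjusted) / math.log(B1)     # ~5.083: a noticeably easier target
```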
Old 2004-06-30, 19:54   #11
wblipp ("William", May 2003, New Haven)

Bob,

I've been working on this, but I'm still stuck. I'm starting to wonder if there was a typesetting error in the equation for mu.

Let's take a simple example, the first entry in Table 3. This entry is
D=5
B1=16
B2=800
Script_P(B1,B2)=0.456

I'm trying to recreate the last value. Here's what I did:

alpha = log(10^4.5/12)/log(16) = 2.841
beta = log(800)/log(16) = 2.411

(alpha uses the square root of 10 and order divisible by 12)

mu(alpha, beta) = 1/(alpha - beta) * Integral from (alpha - beta) to (alpha - 1) of rho(t) dt

= 1/0.430 * Integral from 0.430 to 1.841 of rho(t) dt

Between zero and 1, rho(t) = 1
Between 1 and 2, rho(t) = 1 - ln(t)

so the value of the integral is 1.128
and mu(alpha, beta) = 2.624

This is an unpleasant value for probability.

I've tried searching for another source for the mu function to see if there is some simple typesetting error. I haven't found any such reference online. Your paper credits reference 6, the Knuth paper, but I can't find the mu function in the Knuth paper, either.

Knuth does have formulas for the probability of the second largest prime with no limit on the largest prime, so I tried to adapt your formula to that situation to see if they agreed. I think that would be equivalent to the largest factor < x^1. In your definition of mu, the largest factor is x^(beta/alpha), so I think we need to take alpha = beta to make this change. But as beta approaches alpha, the expression for mu increases without bound - another unpleasant property for a probability.

So I'm stuck. As nearly as I can figure out, the table doesn't match the equation, the equation doesn't make sense, and I can't find different equation. Can you see some error in my work? Or any other way to get me unstuck?

William
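wblipp's arithmetic here can be checked directly from the closed forms of rho on [0, 2] (this is a sketch of the calculation as he describes it, so it reproduces his puzzling value greater than 1, not the table's probability):

```python
import math

def rho_integral(lo, hi):
    """Integral of Dickman's rho from lo to hi, valid for 0 <= lo <= hi <= 2,
    using rho(t) = 1 on [0, 1] and rho(t) = 1 - ln t on [1, 2]."""
    def F(t):
        # Antiderivative of rho, chosen continuous at t = 1:
        # F(t) = t on [0, 1]; F(t) = 2t - t*ln(t) - 1 on [1, 2].
        if t <= 1.0:
            return t
        return 2.0 * t - t * math.log(t) - 1.0
    return F(hi) - F(lo)

# First row of Table 3 (D=5, B1=16, B2=800), with the sqrt(10) and
# divide-by-12 adjustments from the previous reply applied.
alpha = math.log(10 ** 4.5 / 12) / math.log(16)   # ~2.841
beta = math.log(800) / math.log(16)               # ~2.411

m = rho_integral(alpha - beta, alpha - 1.0) / (alpha - beta)
# m comes out near 2.62 -- greater than 1, so, as the post says, it
# cannot be a probability as the formula stands.
```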