2003-12-06, 17:39  #1  
"William"
May 2003
New Haven
2×3^{2}×131 Posts 
Optimal ECM Escalation?
Quote:
1. The next factor found is likely to finish the composite.
For this problem, I think the standard approach of searching long enough to find a factor, on average, if it exists, works well. The transition comes near the point where the expected factors per unit of CPU effort shifts in favor of the next range to test (a rough sketch of that comparison is at the end of this post).

2. The next factor is unlikely to finish the composite.
For example, composites of several hundred to several thousand digits being searched for 25-40 digit factors. We are almost certain to need to search the next range anyway, so it seems intuitive that we should go there sooner. The reason to hesitate is that every factor found makes the following tests complete faster, so some low-value tests can speed the progress - but how many?

3. Future tests will take the same amount of time regardless of factors found.
The Special Project works this way because Prime95 has optimizations for large exponents but only works on full Mersenne numbers. (I think that's right - that Prime95 works on the full Mersenne number and then uses the known factors in the GCD stage.) Here the reasons for doing smaller B1 values would be to avoid finding too many simultaneous factors that are, themselves, a new factoring challenge, and to cover more starting points.

Is the standard approach really best for all of these situations? 
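A rough sketch of the crossover comparison in case 1. The per-curve success probabilities and per-curve costs below are made-up placeholders for illustration only; nothing here comes from any paper or from Prime95/GMP-ECM defaults.
Code:
# Hypothetical sketch: compare expected factors per unit of CPU effort
# for staying at the current B1 level versus escalating to the next one.
# The probabilities and costs are placeholders, not real ECM data.

def factors_per_effort(p_curve, cost_per_curve):
    """Expected factors found per unit of CPU effort at one B1 level."""
    return p_curve / cost_per_curve

stay = factors_per_effort(p_curve=1.0e-3, cost_per_curve=1.0)       # current level
escalate = factors_per_effort(p_curve=4.0e-3, cost_per_curve=5.5)   # next level

# Escalate once the next level yields more expected factors per unit of effort.
print("stay:", stay, "escalate:", escalate,
      "-> escalate" if escalate > stay else "-> stay")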

2003-12-08, 13:44  #2 
Nov 2003
2^{2}×5×373 Posts 
See my joint paper with Sam Wagstaff.
It explains everything that you need to know. 
2003-12-08, 19:07  #3  
"William"
May 2003
New Haven
2·3^{2}·131 Posts 
Quote:
It's widely cited on the web, but I've not found the article itself in cyberspace. Fortunately a good university library is nearby. 

2003-12-08, 22:44  #4  
Feb 2003
2×3×13 Posts 
Quote:
Is it *beneath* you to actually CONTRIBUTE anything here other than backbiting and snide comments? If it is, then please do us all a favour and SOD OFF, taking your stinking attitude with you. YOU WON'T BE MISSED. 

2003-12-09, 07:05  #5 
Mar 2003
Braunschweig, Germany
2·113 Posts 
Calm down, nitro. A pointer to a paper is a big help sometimes and Bob obviously decided not to spend the extra time to summarize the contents. But that is his decision and everyone should be free to decide what to do with his time.
Attitudes are difficult to judge here in a text-only forum. Let's try to keep a civil tongue even if there is strong disagreement about attitudes and/or facts. Even the smilies start looking depressed... 
2003-12-09, 09:10  #6 
"Mike"
Aug 2002
17010_{8} Posts 
Any chance we can get a PDF of that document, Bob?

2003-12-09, 09:46  #7  
Bamboozled!
"πΊππ·π·π"
May 2003
Down not across
7^{2}·11·19 Posts 
Quote:
It's my guess that you haven't actually read that paper. I have. If you had read it you would know that it's very densely packed with good mathematics. There is vastly more material there than Bob would want to retype into a forum like this and, believe me, hardly anyone would want to read it here either. As another person has said, you'd want to read it in PDF or perhaps dvi format. In my opinion, Bob did exactly the right thing. He answered the question by giving a reference to a paper that's easily obtainable in all good university libraries and the paper does, indeed, tell wblipp everything he needs to know (and, I suspect, a great deal more than he needs to know for his purposes). Paul 

2003-12-09, 14:59  #8  
"William"
May 2003
New Haven
2×3^{2}×131 Posts 
Quote:
William 

2004-04-06, 16:12  #9  
"William"
May 2003
New Haven
936_{16} Posts 
Quote:
Can you help me with how the numbers in Table 3 are calculated? I was able to follow the method of using Simpson's rule to numerically evaluate the rho function, and I can match the values in the Knuth reference to all twelve printed decimal digits. But I haven't figured out how to recreate the probability numbers in Table 3.

For example, take the line

D=29  B1=247988  B2=9.99e6  P=1.38e-3

I thought I should calculate

alpha = log(10^28)/log(B1) = 5.191
beta = log(B2)/log(B1) = 1.298
mu(alpha, beta) = 1/(alpha - beta) * Integral from (alpha - beta) to (alpha - 1) of rho(t) dt = 3.46e-4

I thought this mu value would be the P value in the table. I see two things I haven't used yet. I didn't use the Xi function in this calculation, but I thought that was about the probability that there would be a 29-digit factor of the composite. I also see a note that the table includes the effect of selecting curves so that the group order has a divisor of 12; I haven't figured out how that changes the calculation.

Thanks,

William 
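In case it helps anyone reproduce the attempt above, here is a minimal sketch of that calculation as I read it: tabulate Dickman's rho from u*rho'(u) = -rho(u-1) using Simpson's rule, then evaluate mu(alpha, beta) as quoted above. The helper names, the step size h, and the trapezoidal rule inside mu() are my own illustrative choices, not anything from the paper.
Code:
import math

def dickman_table(u_max, h=1e-3):
    """Tabulate Dickman's rho on [0, u_max] with step h.
    rho(u) = 1 on [0,1], rho(u) = 1 - ln(u) on [1,2]; for u > 2,
    integrate u*rho'(u) = -rho(u-1) with Simpson's rule over two steps."""
    n = int(round(u_max / h))
    one = int(round(1.0 / h))            # number of grid steps in a unit interval
    rho = [1.0] * (n + 1)
    for i in range(n + 1):
        u = i * h
        if u <= 1.0:
            rho[i] = 1.0
        elif u <= 2.0:
            rho[i] = 1.0 - math.log(u)
        else:
            g = lambda j: rho[j - one] / (j * h)   # integrand rho(t-1)/t on the grid
            simpson = (h / 3.0) * (g(i - 2) + 4.0 * g(i - 1) + g(i))
            rho[i] = rho[i - 2] - simpson
    return rho

def mu(alpha, beta, rho, h=1e-3):
    """1/(alpha-beta) * Integral from (alpha-beta) to (alpha-1) of rho(t) dt,
    using the trapezoidal rule on the tabulated values."""
    i0 = int(round((alpha - beta) / h))
    i1 = int(round((alpha - 1.0) / h))
    integral = h * (0.5 * rho[i0] + sum(rho[i0 + 1:i1]) + 0.5 * rho[i1])
    return integral / (alpha - beta)

# The D=29 row: B1=247988, B2=9.99e6, 29-digit target taken as 10^28.
rho = dickman_table(6.0)
B1, B2 = 247988.0, 9.99e6
alpha = 28 * math.log(10) / math.log(B1)   # ~5.191
beta = math.log(B2) / math.log(B1)         # ~1.298
print(mu(alpha, beta, rho))                # roughly 3.5e-4, close to the 3.46e-4 above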

2004-05-05, 14:15  #10  
Nov 2003
2^{2}·5·373 Posts 
Quote:
was divisible by 12. Adjust D accordingly. Also, note that 29 digits can be anywhere from 10^28 + 1 to 10^29 - 1. As I recall, I may have used the geometric mean approximation ~ sqrt(10) * 10^28 as a 'typical' 29-digit number. Now divide this by 12 before computing Dickman's function. 
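In code, the adjustment described above might look like the sketch below; it only changes the argument fed to Dickman's function, and then alpha and beta would go into the same mu() calculation as in the earlier post. The dickman_table()/mu() helpers referred to there are my own illustration, not from the paper.
Code:
import math

# Sketch of the adjustment: take sqrt(10)*10^28 as a "typical" 29-digit
# number, divide by 12 for the forced divisor of the group order, and
# only then form alpha for Dickman's function.
B1, B2 = 247988.0, 9.99e6
typical = math.sqrt(10) * 10**28           # geometric-mean approximation
adjusted = typical / 12.0                  # remove the known divisor of 12
alpha = math.log(adjusted) / math.log(B1)  # ~5.08 instead of ~5.19
beta = math.log(B2) / math.log(B1)         # ~1.30
print(alpha, beta)                         # feed these into mu() as before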

2004-06-30, 19:54  #11 
"William"
May 2003
New Haven
2·3^{2}·131 Posts 
Bob,
I've been working on this, but I'm still stuck. I'm starting to wonder if there was a typesetting error in the equation for mu. Let's take a simple example, the first entry in Table 3. This entry is

D=5  B1=16  B2=800  Script_P(B1,B2)=0.456

I'm trying to recreate the last value. Here's what I did:

alpha = log(10^4.5 / 12)/log(16) = 2.841
beta = log(800)/log(16) = 2.411

(alpha uses the square root of 10 and a group order divisible by 12)

mu(alpha, beta) = 1/(alpha - beta) * Integral {from (alpha - beta) to (alpha - 1)} rho(t) dt
                = 1/0.430 * Integral {from 0.430 to 1.841} rho(t) dt

Between zero and 1, rho(t) = 1. Between 1 and 2, rho(t) = 1 - ln(t). So the value of the integral is 1.128 and mu(alpha, beta) = 2.624. This is an unpleasant value for a probability.

I've tried searching for another source for the mu function to see if there is some simple typesetting error, but I haven't found any such reference online. Your paper credits reference 6, the Knuth paper, but I can't find the mu function in the Knuth paper either. Knuth does have formulas for the probability of the second-largest prime with no limit on the largest prime, so I tried to adapt your formula to that situation to see if they agreed. I think that would be equivalent to the largest factor < x^1. In your definition of mu, the largest factor is x^(beta/alpha), so I think we need to take alpha = beta to make this change. But as beta approaches alpha, the expression for mu increases without bound, which is another unpleasant property for a probability.

So I'm stuck. As nearly as I can figure out, the table doesn't match the equation, the equation doesn't make sense, and I can't find a different equation. Can you see some error in my work? Or any other way to get me unstuck?

William 