#1
P90 years forever!
Aug 2002
Yeehaw, FL
1110001111110₂ Posts
Coming in the final version of 24.12 are new trial factoring breakeven points!!
[code]
#define FAC72 96830000L
#define FAC71 75670000L
#define FAC70 58520000L
#define FAC69 47450000L
#define FAC68 37800000L
#define FAC67 29690000L
#define FAC66 23390000L
[/code]

These were calculated on my 2.0 GHz P4. They are much, much higher than before. That is, we will be doing a lot less trial factoring in the future.

There are several reasons these are higher. The old breakeven points were probably done on my 400 MHz PII. With SSE2 and the P4's fast L2 cache and lots of coding tweaks, LL tests are now a lot faster relative to factoring. That is, a new factor doesn't save as much LL time as it used to. Also, I've assumed a missed trial factor will be found by P-1 factoring 30% of the time. P-1 factoring didn't even exist when the old breakeven points were calculated.

For comparison, the old breakevens were:

[code]
#define FAC72 71000000L
#define FAC71 57020000L
#define FAC70 44150000L
#define FAC69 35200000L
#define FAC68 28130000L
#define FAC67 21590000L
#define FAC66 17890000L
[/code]
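For illustration, exponents above a FACnn breakeven are trial factored to nn bits. Here is a minimal C sketch of how such a table might be consulted; the helper function and the sample exponents are invented for the example and are not the client's actual code:

[code]
#include <stdio.h>

/* The new v24 breakeven points quoted above: exponents larger than
   FACnn are trial factored to nn bits.  The helper below is only an
   illustration of how such a table might be used; the real client
   logic differs in detail. */
#define FAC72 96830000L
#define FAC71 75670000L
#define FAC70 58520000L
#define FAC69 47450000L
#define FAC68 37800000L
#define FAC67 29690000L
#define FAC66 23390000L

/* Hypothetical helper: how many bits should this exponent be trial
   factored to under the new limits? */
static int trial_factor_bits(long exponent)
{
    if (exponent > FAC72) return 72;
    if (exponent > FAC71) return 71;
    if (exponent > FAC70) return 70;
    if (exponent > FAC69) return 69;
    if (exponent > FAC68) return 68;
    if (exponent > FAC67) return 67;
    if (exponent > FAC66) return 66;
    return 65;   /* below the smallest limit quoted in this post */
}

int main(void)
{
    long samples[] = { 20000000L, 33000000L, 79300000L };
    for (int i = 0; i < 3; i++)
        printf("M%ld -> trial factor to 2^%d\n",
               samples[i], trial_factor_bits(samples[i]));
    return 0;
}
[/code]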
#2
"Nancy"
Aug 2002
Alexandria
2,467 Posts
How about making the thresholds depend on p (mod 120)? For example, for candidate factors p ≡ 1 (mod 120) we know that 120 | p-1, giving P-1 a much higher chance of recovering such factors if they are missed by trial division. On the other hand, with p ≡ 119 (mod 120), p-1 has no prime factors ≤ 5 except a single 2, so these are poor candidates for P-1 and could be trial divided to a higher limit.
So for each class r (mod 120), compute the limit up to which, for a prime p ≡ r (mod 120), the probability that p (1) is not covered by P-1 and (2) divides the Mersenne number stays above some threshold. (1) depends on r and on the size of p; (2) only on the size of p. (1) can be computed accurately; I've played with problems like that recently. It could be worthwhile to treat more separate classes, say 96 classes (mod 840) or even 960 classes (mod 9240), so that knowledge of whether 7 or 11, respectively, divides p-1 can also be used when calculating the trial factoring limit for that class.

Alex

Last fiddled with by akruppa on 2005-06-16 at 20:45
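A small C sketch of the idea: it assumes only the standard facts that a prime factor p of M_q (q an odd prime) is coprime to 120 and ≡ ±1 (mod 8), and that for p ≡ r (mod 120) the quantity gcd(r-1, 120) is a guaranteed divisor of p-1, i.e. the contribution of 2, 3 and 5 that P-1 gets for free in that class. The probability computation itself is not shown:

[code]
#include <stdio.h>

/* Sketch only: enumerate the residue classes r (mod 120) that a prime
   factor p of a Mersenne number M_q (q an odd prime) can lie in, and
   the part of p-1 that is guaranteed by r alone.  Uses two standard
   facts: p is coprime to 120, and p == +-1 (mod 8).  Everything else
   (the probability model, the per-class limits) is left out. */

static long gcd(long a, long b)
{
    while (b) { long t = a % b; a = b; b = t; }
    return a;
}

int main(void)
{
    int classes = 0;
    for (long r = 1; r < 120; r++) {
        if (gcd(r, 120) != 1)            /* p must be coprime to 2, 3, 5 */
            continue;
        if (r % 8 != 1 && r % 8 != 7)    /* factors of M_q are +-1 mod 8 */
            continue;
        /* gcd(r-1, 120) divides p-1 for every p == r (mod 120):
           the guaranteed contribution of 2, 3 and 5 to p-1. */
        printf("r = %3ld: guaranteed divisor of p-1 = %ld\n",
               r, gcd(r - 1, 120));
        classes++;
    }
    printf("%d classes in total\n", classes);   /* prints 16 */
    return 0;
}
[/code]

Running it lists the 16 admissible classes, from r = 1 (guaranteed divisor 120) down to r = 119 (guaranteed divisor 2), which is exactly the spread described above.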
#3
P90 years forever!
Aug 2002
Yeehaw, FL
2×7×521 Posts
Now that is a clever idea!
#4
Sep 2002
Oeiras, Portugal
2⁵·3²·5 Posts
I was also wondering whether the type of machine shouldn't be taken into consideration. For example, an Athlon 64 running the new 64-bit version of the client is a lot more effective at trial factoring relative to LL, and it is much better at trial factoring than most P4s. So, at least for the Athlon 64, don't you think it would pay to set the breakeven points lower?
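To make that concrete, here is a rough C sketch of how machine-specific breakeven points could be estimated. The two timing functions are placeholders to be filled with values measured on the machine in question (an Athlon 64 running the 64-bit client vs. a P4, say), and the cost model is only the usual rule of thumb, not the exact formula the client uses: a factor between 2^(b-1) and 2^b turns up with probability about 1/b, a found factor saves roughly two LL tests, and a missed factor is assumed to be found by P-1 30% of the time, as in the first post.

[code]
#include <math.h>
#include <stdio.h>

/* Placeholder timing models: in practice these would be replaced by
   numbers measured on the machine in question (e.g. an Athlon 64
   running the 64-bit client).  The constants here are invented. */

/* seconds for one LL test of M(q); crude stand-in, roughly ~ q^2 */
static double ll_seconds(long q)
{
    return 2.0e-10 * (double) q * (double) q;
}

/* seconds to trial factor M(q) from 2^(bits-1) to 2^bits:
   roughly, some fraction of 2^bits / q candidates survive the
   small-prime sieve, at some fixed cost per candidate */
static double tf_seconds(long q, int bits)
{
    return 1.0e-7 * pow(2.0, bits) / (120.0 * (double) q);
}

/* Smallest exponent at which factoring bit level `bits' pays off:
   expected saving = P(factor in this range) * P(P-1 misses it)
                     * (two LL tests saved). */
static long breakeven(int bits)
{
    double p_factor = 1.0 / bits;     /* ~1/b chance of a factor here */
    double p_pm1_misses = 0.7;        /* 30% assumed found by P-1     */
    for (long q = 1000000; q < 1000000000; q += 10000) {
        double saving = p_factor * p_pm1_misses * 2.0 * ll_seconds(q);
        if (saving >= tf_seconds(q, bits))
            return q;
    }
    return -1;
}

int main(void)
{
    for (int bits = 66; bits <= 72; bits++)
        printf("FAC%d ~ %ld  (with these placeholder timings)\n",
               bits, breakeven(bits));
    return 0;
}
[/code]

Plugging in measured Athlon 64 timings in place of the invented constants would show whether, and by how much, its breakevens should sit below the P4-derived ones.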
#5
Jun 2003
11352₈ Posts
Quote:
#6
"Nancy"
Aug 2002
Alexandria
9A3₁₆ Posts
anx1: I spent a bit of time trying to think of a way to make that efficient, but never found one. The "problem" is that actually testing a candidate factor by trial division takes so little time that elaborate schemes for eliminating candidates usually turn out slower than simply testing them. Treating the 16 classes (mod 120) (or however many, mod whatever) has the benefit of adding *very* little overhead.

Alex
#7
Jun 2003
1001011101010₂ Posts
Quote:
Some practical considerations: some kind of "approximate logarithm" sieve can be used for the purpose. Sieving would be done only with the primes and prime powers less than the B1 value. If what is left behind is less than B2, the candidate would be considered smooth. If it takes too long to sieve with the current B1 values, we could use a smaller value, say 100K.
Quote:
I hope I am making some sense.
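A minimal C sketch of such an approximate-log sieve, assuming candidate factors have the usual form p = 2kq + 1, so that, since 2 and q are always part of the P-1 exponent, p-1 is covered by P-1 exactly when k is B1/B2-smooth. The bounds, block size and the smoothness test are all illustrative:

[code]
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch of an approximate-logarithm smoothness sieve over k, where
   the candidate factors of M(q) are p = 2*k*q + 1.  Because 2 and q
   are automatically in the P-1 exponent, p-1 is B1/B2-smooth exactly
   when k is, so we sieve k.  All bounds below are illustrative. */

#define B1       100000UL    /* stage 1 bound                      */
#define B2       3000000.0   /* stage 2 bound                      */
#define K_START  1000000UL   /* first k of the block being sieved  */
#define K_LEN    100000UL    /* block length                       */

int main(void)
{
    double *acc = calloc(K_LEN, sizeof *acc);   /* log of found smooth part */
    char   *is_comp = calloc(B1 + 1, 1);        /* for a prime sieve to B1  */
    if (!acc || !is_comp) return 1;

    for (unsigned long p = 2; p <= B1; p++) {
        if (is_comp[p]) continue;
        if (p <= B1 / p)                        /* mark composites */
            for (unsigned long m = p * p; m <= B1; m += p) is_comp[m] = 1;

        /* for each prime power p^e <= B1, add log p to every k it divides */
        unsigned long pe = p;
        for (;;) {
            unsigned long k = (K_START + pe - 1) / pe * pe;  /* first multiple >= K_START */
            for (; k < K_START + K_LEN; k += pe)
                acc[k - K_START] += log((double) p);
            if (pe > B1 / p) break;             /* next power would exceed B1 */
            pe *= p;
        }
    }

    /* heuristic: if the unfactored part of k is below B2, P-1 with
       bounds B1/B2 would (very likely) find p = 2*k*q + 1 */
    unsigned long covered = 0;
    for (unsigned long i = 0; i < K_LEN; i++)
        if (log((double)(K_START + i)) - acc[i] < log(B2))
            covered++;

    printf("%lu of %lu candidates in this block look covered by P-1\n",
           covered, K_LEN);
    free(acc);
    free(is_comp);
    return 0;
}
[/code]

Whether this pays off is exactly the overhead question raised above: sieving a block has to be cheaper than simply trial dividing the candidates it would eliminate.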
#8
Jun 2003
11352₈ Posts
Quote:
#9
Aug 2002
Termonfeckin, IE
3×919 Posts
How will the Primenet server deal with all this? I remember that there were some problems with the server when the 66-bit limit was changed from 17850000 to 17890000 (or vice versa). Won't the server hand back exponents already factored to the new limits and expect them to be factored again?
#10
P90 years forever!
Aug 2002
Yeehaw, FL
2·7·521 Posts
The server was changed to assume all clients use the v24 factoring limits. v23 and earlier clients will factor deeper, but that won't cause the server any problems.
#11
Aug 2002
Termonfeckin, IE
3×919 Posts
Thanks George. That puts my mind to rest.