mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Software (https://www.mersenneforum.org/forumdisplay.php?f=10)
-   -   New factoring breakeven points coming (https://www.mersenneforum.org/showthread.php?t=4213)

Prime95 2005-06-16 20:04

New factoring breakeven points coming
 
Coming in the final version of 24.12 are new trial factoring breakeven points!!

#define FAC72 96830000L
#define FAC71 75670000L
#define FAC70 58520000L
#define FAC69 47450000L
#define FAC68 37800000L
#define FAC67 29690000L
#define FAC66 23390000L

These were calculated on my 2.0 GHz P4. They are much, much higher than before. That is, we will be doing a lot less trial factoring in the future.
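
Roughly, the constants are used like this (a simplified sketch of the idea, not the actual source, which handles more bit levels on both ends): each FACnn is the smallest exponent worth trial factoring to nn bits.

static unsigned int factor_depth (unsigned long exponent)
{
    if (exponent > FAC72) return 72;
    if (exponent > FAC71) return 71;
    if (exponent > FAC70) return 70;
    if (exponent > FAC69) return 69;
    if (exponent > FAC68) return 68;
    if (exponent > FAC67) return 67;
    if (exponent > FAC66) return 66;
    return 65;          /* below FAC66 the table continues downward */
}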

There are several reasons these are higher.

The old breakeven points were probably done on my 400 MHz PII. With SSE2 and the P4's fast L2 cache and lots of coding tweaks, LL tests are now a lot faster relative to factoring. That is, a new factor doesn't save as much LL time as it used to.

Also, I've assumed a missed trial factor will be found by P-1 factoring 30% of the time. P-1 factoring didn't even exist when the old breakeven points were calculated.
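
As a rough sketch of the tradeoff (back-of-the-envelope only, not the exact calculation): trial factoring one bit deeper, from 2^(b-1) to 2^b, pays off roughly when

  cost(TF of bit b) < (1/b) * (1 - 0.30) * 2 * cost(LL test)

using the usual heuristic that about 1 in b exponents has a factor in that bit range, the 30% P-1 overlap just mentioned, and the fact that a found factor saves both a first-time test and a double-check.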

For comparison, the old breakevens were:

#define FAC72 71000000L
#define FAC71 57020000L
#define FAC70 44150000L
#define FAC69 35200000L
#define FAC68 28130000L
#define FAC67 21590000L
#define FAC66 17890000L

akruppa 2005-06-16 20:43

How about making the thresholds depend on p (mod 120)? For example, for candidate factors p==1 (mod 120), we know that 120|p-1, giving P-1 a much higher chance of recovering such factors if missed by trial division. Otoh, with p==119 (mod 120), p-1 has no prime factors <5 except a single 2, so these are poor candidates for P-1 and could be trial divided higher.

So for each class r (mod 120), compute the limit up to which, for a prime p == r (mod 120), the probability of (1.) not being covered by P-1 and (2.) dividing the Mersenne number stays above some threshold. (1.) depends on r and the size of p; (2.) only on the size of p. (1.) can be computed accurately; I've played with problems like that recently.

It could be worthwhile to treat more separate classes, say 96 classes (mod 840), or even 960 classes (mod 9240), so knowledge that 7 or 11, resp., divides p-1 can be used for calculating the trial factoring limit for that class.
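
To make the classes concrete, here is a throwaway illustration (sketch only, nothing from the client): it lists the 16 admissible residues r (mod 120) for a candidate factor, which must be == +-1 (mod 8), together with the divisor of p-1 that membership in the class already guarantees, namely gcd(r-1, 120).

#include <stdio.h>

static unsigned int gcd (unsigned int a, unsigned int b)
{
    while (b != 0) { unsigned int t = a % b; a = b; b = t; }
    return a;
}

int main (void)
{
    unsigned int r;
    /* Candidate factors of a Mersenne number are == +-1 (mod 8), which
       leaves 16 of the 32 residues (mod 120) coprime to 120.  For a
       factor p == r (mod 120), gcd(r-1, 120) is guaranteed to divide
       p-1, so classes with a large gcd are good targets for P-1. */
    for (r = 1; r < 120; r++) {
        if (gcd (r, 120) != 1) continue;
        if (r % 8 != 1 && r % 8 != 7) continue;
        printf ("r = %3u: guaranteed divisor of p-1: %3u\n",
                r, gcd (r - 1, 120));
    }
    return 0;
}

r == 1 comes out with the full 120 and r == 119 with just a 2, matching the two examples above.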

Alex :akruppa:

Prime95 2005-06-16 20:57

Now that is a clever idea!

lycorn 2005-06-16 22:46

I was also wondering if the type of machine shouldn't be taken into consideration. For example, an Athlon 64 running the new 64-bit version of the client is a lot more effective at trial factoring than at LL testing, and it is much better at trial factoring than most P4s. So, at least for the Athlon 64, don't you think it would pay to set the breakeven points lower?

axn 2005-06-17 14:26

[QUOTE=Prime95]Now that is a clever idea![/QUOTE]
Why not go all the way and eliminate candidates with smooth p-1 from TF altogether? Using some additional sieving, candidates p with smooth p-1 could be quickly identified and eliminated, resulting in 30% fewer candidates to be checked by TF (all of this assumes, of course, that P-1 will be run without fail after TF).

akruppa 2005-06-18 09:57

axn1: I spent a bit of time trying to think of a way to make that efficient but never found one. The "problem" is that actually testing a candidate factor by trial division takes so little time that elaborate schemes for eliminating candidates usually turn out slower than simply testing them. Treating the 16 classes (mod 120) (or however many mod whatever) has the benefit of adding *very* little overhead.

Alex

axn 2005-06-18 13:57

[QUOTE=akruppa]actually testing a candidate factor by trial division takes so little time that elaborate schemes for eliminating candidates usually turn out slower than simply testing them.[/QUOTE]I don't have any hard numbers on this, but I assume that actual trial division still dominates the computation time compared to sieving (> 90%?). In such a scenario, if we could reduce the trial division effort by 20-30% while *only* doubling the sieve cost, we would, at the very least, break even.
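
To put rough numbers on that: if trial division is 90% of the total cost and sieving 10%, then eliminating 25% of the candidates while doubling the sieve cost leaves 0.90 * 0.75 + 0.10 * 2 = 0.875 of the original work, so even fairly pessimistic assumptions come out ahead.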

Some practical considerations: some kind of "approximate logarithm" sieve can be used for the purpose. Sieving would be done with all primes and prime powers less than the B1 value only. If the logarithm left behind is less than log(B2), the candidate would be considered smooth. If it takes too long to sieve with the current B1 values, we could use a smaller value, say 100K.
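
Here is a rough, self-contained sketch of such a sieve (my interpretation only; B2, the range of k, and the quarter-bit log representation are all made up for illustration). Candidate factors of M_q have the form p = 2kq+1, and since 2 and q divide p-1 for free, P-1 finds p exactly when k itself is smooth enough:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define B1 100000UL     /* stage 1 bound, the 100K suggested above */
#define B2 2000000UL    /* assumed stage 2 bound, for illustration */
#define K  1000000UL    /* sieve candidates k = 1 .. K (made-up range) */

int main (void)
{
    unsigned char *residue = malloc (K + 1);
    unsigned char *composite = calloc (B1 + 1, 1);
    unsigned long s, pe, k, eliminated = 0;
    unsigned char limit = (unsigned char) (4.0 * log2 ((double) B2) + 0.5);

    if (residue == NULL || composite == NULL) return 1;

    /* Start each k at ~log2(k), in quarter-bit units; sieving then
       subtracts the B1-smooth part.  Rounding makes the test heuristic,
       which is harmless: a false "not smooth" only wastes a little TF. */
    for (k = 1; k <= K; k++)
        residue[k] = (unsigned char) (4.0 * log2 ((double) k) + 0.5);

    for (s = 2; s <= B1; s++) {
        unsigned char lg;
        if (composite[s]) continue;
        for (pe = 2 * s; pe <= B1; pe += s) composite[pe] = 1;
        /* Subtract log2(s) at the multiples of every power s^e <= B1,
           so a k divisible by s^e loses e*log2(s) in total. */
        lg = (unsigned char) (4.0 * log2 ((double) s) + 0.5);
        for (pe = s; pe <= B1; pe *= s) {
            for (k = pe; k <= K; k += pe)
                residue[k] -= lg;
            if (pe > B1 / s) break;  /* avoid overflow before pe *= s */
        }
    }

    /* What remains is the log of the rough part of k; if it is below
       log2(B2), a single stage 2 prime covers it and trial factoring
       can skip the candidate p = 2kq+1. */
    for (k = 1; k <= K; k++)
        if (residue[k] <= limit) eliminated++;

    printf ("%lu of %lu candidates eliminated as smooth\n", eliminated, K);
    free (residue);
    free (composite);
    return 0;
}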

[QUOTE=akruppa]Treating the 16 classes (mod 120) (or however many mod whatever) has the benefit of adding *very* little overhead.[/QUOTE]The smoothness test can be used in addition to your method. In an ideal scenario, once both methods are implemented, one would expect to see an approximately equal number of candidates tested per class.

I hope I am making some sense :confused:

axn 2005-06-18 15:09

[QUOTE=axn1]equal number of candidates tested per class[/QUOTE]That should be "equal number of candidates *eliminated* per class".

garo 2005-06-22 17:19

How will the PrimeNet server deal with all this? I remember there were some problems with the server when the 66-bit limit was changed from 17850000 to 17890000 (or vice versa). Won't the server hand back numbers factored only to the new limits and expect them to be factored again?

Prime95 2005-06-22 22:26

The server was changed to assume all clients use the v24 factoring limits. v23 and earlier clients will factor deeper, but that won't cause the server any problems.

garo 2005-06-23 06:38

Thanks George. That puts my mind to rest. :w00t: I'm aware that deeper factoring will not cause any problems.

