
mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Math (https://www.mersenneforum.org/forumdisplay.php?f=8)
-   -   Recommended TF bit levels for M(>10^8) (https://www.mersenneforum.org/showthread.php?t=10886)

ATH 2008-11-01 18:22

[QUOTE=retina;147457]Don't forget to check out all the other potential problems also. [url=http://en.wikipedia.org/wiki/Time_formatting_and_storage_bugs]link[/url]. My favourite one is "The year 170,141,183,460,469,231,731,687,303,715,884,105,727 problem".[/QUOTE]

ROFL. OMG my computer is not ready for the year 170,141,183,460,469,231,731,687,303,715,884,105,727 problem. I won't be home that year, I already made plans. I have to find someone to come by and restart prime95 :)

R. Gerbicz 2008-11-01 18:34

The TF bit level for Mn is about k=log(n)*3/log(2).

ckdo 2008-11-01 20:05

[quote=R. Gerbicz;147481]The TF bit level for Mn is about k=log(n)*3/log(2).[/quote]

Gives k=76.499714254 for n=47450000 and k=77.407279301 for n=58520000.

Only off by 7 bits from the tabulated values... :tu:
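
For anyone wanting to reproduce those numbers, here is a minimal standalone C sketch (not from any GIMPS source) that evaluates the k = 3*log2(n) approximation at the two exponents above:
[code]/* Evaluate R. Gerbicz's approximation k = 3*log2(n) at the two
   exponents checked above.  Standalone illustration only. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double n[] = { 47450000.0, 58520000.0 };
    for (int i = 0; i < 2; i++) {
        double k = 3.0 * log(n[i]) / log(2.0);
        printf("n = %.0f  ->  k = %.9f\n", n[i], k);
    }
    /* Prints ~76.50 and ~77.41, versus the tabulated 69 and 70 bits. */
    return 0;
}[/code]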

ATH 2008-11-01 22:40

If you look at the data from 66 bits to 80 bits, it can be fitted with:

bitdepth = 22.94*exponent[sup]0.0623[/sup]

or

bitdepth = 10.4428*log[sub]10[/sub](exponent) - 11.15
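
As a rough check, the sketch below evaluates both fits at a few of the tabulated breakeven exponents; it assumes the log in the second fit is base 10, which is what reproduces the table:
[code]/* Evaluate ATH's two curve fits at a few of the tabulated breakeven
   exponents (66, 69 and 80 bits).  The second fit assumes a base-10 log.
   Illustration only. */
#include <math.h>
#include <stdio.h>

static double fit_power(double p) { return 22.94 * pow(p, 0.0623); }
static double fit_log(double p)   { return 10.4428 * log10(p) - 11.15; }

int main(void)
{
    const double p[] = { 23390000.0, 47450000.0, 516000000.0 };
    for (int i = 0; i < 3; i++)
        printf("p = %.0f: power fit = %.2f bits, log fit = %.2f bits\n",
               p[i], fit_power(p[i]), fit_log(p[i]));
    /* Both fits land within about 0.2 bits of the tabulated 66, 69 and 80. */
    return 0;
}[/code]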

S00113 2008-11-02 13:15

I wonder what hardware those bit levels were calculated on. I assume the optimal setting must be different on AMD64 (64 bit), because it is so much faster at trial factoring than LL-testing. One factor found by trial factoring saves two LL tests. For some exponents AMD64 owners should probably factor one bit deeper to potentially save the LL test. P-1 complicates matters as well. A given factor of n bits has x probability of being found by P-1 to B1=y, B2=z. I guess some hardware is better at stage 1. If your RAM is fast compared to your CPU, you are probably better off with a lower B1 and higher B2. Perhaps the limits should be different for each computer based on CPU and RAM speed.

S485122 2008-11-02 15:43

[QUOTE=S00113;147561]Perhaps the limits should be different for each computer based on CPU and RAM speed.[/QUOTE]The problem is that the person doing the TF might be different from the one doing the P-1, and yet another person might do the LL test; the double check should be done by someone other than the one who did the LL test.

What could be done, though, is to assign TF up to 63 bits to AMD64 and Intel 64-bit CPUs, because at those depths these CPU/software combinations really make a difference. But again, is it worth the complexity?

Concerning P-1, I am of the opinion that all exponents should be done to their ideal level, not a level determined by the available memory. When you receive double checks, some are done to limits half those of others, and some exponents did not even have a stage 2 done... The problem from PrimeNet's point of view is that the software would not be as invisible (either trying to grab too much memory, or trying a stage 2 with only 8MB of memory assigned, well...)

Jacob

cheesehead 2008-11-03 02:04

[quote=S00113;147561]I wonder what hardware those bit levels were calculated on.[/quote]The levels aren't very sensitive to hardware type. Modest differences in TF efficiency across hardware don't make much difference when the steps are powers of 2.

But since you asked, here's the relevant part of v25.2 source module commonc.h:

(Answer: 2.0 GHz P4 Northwood)

[code]/* Factoring limits based on complex formulas given the speed of the */
/* factoring code vs. the speed of the Lucas-Lehmer code */
/* As an example, examine factoring to 2^68 (finding all 68-bit factors). */
/* First benchmark a machine to get LL iteration times and trial factoring */
/* times for a (16KB sieve of p=35000011). */
/* We want to find when time spent eliminating an exponent with */
/* trial factoring equals time saved running 2 LL tests. */

/* runs to find a factor (68) *
#16KB sections (2^68-2^67)/p/(120/16)/(16*1024*8) *
factoring_benchmark = 2.0 * LL test time (p * ll_benchmark)

simplifying:

68 * (2^68-2^67)/p/(120/16)/(16*1024*8) * facbench = 2 * p * llbench
68 * 2^67 / p / (120/16) / 2^17 * facbench = 2 * p * lltime
68 * 2^49 / p / (120/16) * facbench = p * lltime
68 * 2^49 / (120/16) * facbench = p^2 * lltime
68 * 2^53 / 120 * facbench = p^2 * lltime
68 * 2^53 / 120 * facbench / lltime = p^2
sqrt (68 * 2^53 / 120 * facbench / lltime) = p
*/

/* Now let's assume 30% of these factors would have been found by P-1. So
we only save a relatively quick P-1 test instead of 2 LL tests. Thus:
sqrt (68 / 0.7 * 2^53 / 120 * facbench / lltime) = p
*/

/* Now factor in that 35000000 does 19 squarings, but 70000000 requires 20.
Thus, if maxp is the maximum exponent that can be handled by an FFT size:
sqrt (68 / 0.7 * 2^53 / 120 *
facbench * (1 + LOG2 (maxp/35000000) / 19) / lltime) = p
*/

/* Now factor in that errors sometimes force us to run more than 2 LL tests.
Assume 2.04 on average:
sqrt (68 / 0.7 * 2^53 / 120 *
facbench * (1 + LOG2 (maxp/35000000) / 19) / lltime / 1.02) = p
*/

/* These breakeven points were calculated on a 2.0 GHz P4 Northwood: */

#define FAC80 516000000L
#define FAC79 420400000L
#define FAC78 337400000L
#define FAC77 264600000L
#define FAC76 227300000L
#define FAC75 186400000L
#define FAC74 147500000L
#define FAC73 115300000L
#define FAC72 96830000L
#define FAC71 75670000L
#define FAC70 58520000L
#define FAC69 47450000L
#define FAC68 37800000L
#define FAC67 29690000L
#define FAC66 23390000L

/* These breakevens were calculated a long time ago on unknown hardware: */

#define FAC65 13380000L
#define FAC64 8250000L
#define FAC63 6515000L
#define FAC62 5160000L
#define FAC61 3960000L
#define FAC60 2950000L
#define FAC59 2360000L
#define FAC58 1930000L
#define FAC57 1480000L
#define FAC56 1000000L
[/code]
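
To make the final formula in those comments concrete, here is a small sketch that plugs benchmark figures into the 68-bit breakeven expression; the facbench, lltime and maxp values below are rough placeholder guesses (roughly P4-class), not the actual Northwood timings the table was built from:
[code]/* Plug benchmark numbers into the final 68-bit breakeven formula from the
   comments above.  facbench (seconds per 16KB sieve section), lltime
   (seconds per LL iteration) and maxp (largest exponent for the FFT size)
   are placeholders, not the actual Northwood measurements. */
#include <math.h>
#include <stdio.h>

static double breakeven_68(double facbench, double lltime, double maxp)
{
    return sqrt(68.0 / 0.7 * pow(2.0, 53) / 120.0 *
                facbench * (1.0 + log2(maxp / 35000000.0) / 19.0) /
                lltime / 1.02);
}

int main(void)
{
    double facbench = 0.018;       /* placeholder: s per 16KB factoring section */
    double lltime   = 0.09;        /* placeholder: s per LL iteration at ~35M */
    double maxp     = 39500000.0;  /* placeholder: FFT-size limit */
    printf("68-bit breakeven: p = %.0f\n", breakeven_68(facbench, lltime, maxp));
    /* With these guesses the result lands near the FAC68 value of 37,800,000. */
    return 0;
}[/code]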

[quote]I assume the optimal setting must be different on AMD64 (64 bit), because it is so much faster at trial factoring than LL-testing.[/quote]Do you mean that AMD64 is relatively faster than Intel at TF, but not so different at LL?

[quote]One factor found by trial factoring saves two LL tests.[/quote]... or one P-1 test.

[quote]For some exponents AMD64 owners should probably factor one bit deeper to potentially save the LL test.[/quote]Since each successive bit level doubles TF time (the 2^k to 2^(k+1) range contains as many candidates as all the lower ranges combined), there has to be quite a differential to justify it. Will you run the numbers for us?

[quote]P-1 complicates matters as well. A given factor of n bits has x probability of being found by P-1 to B1=y, B2=z.[/quote]Note the 30% rule-of-thumb in the code comments.

[quote]I guess some hardware is better at stage 1. If your RAM is fast comared to your CPU, you are probably better off by a lower B1 and higher B2. Perhaps the limits should be different for each computer based on CPU and RAM speed.[/quote]Prime95 already computes, based on CPU type and amount of RAM, the optimum B1 and B2. That's why you can see that some adjacent exponents that have already been P-1'd have had notably different B1/B2. Example: 40000217 has been P-1'd to 650000,650000, while 40000231 has been P-1'd to 410000,3382500.

ckdo 2008-11-03 07:30

[quote=S485122;147354]The values I have (from the sources of v24 and v25) are :[code]Bits up to Exponent
56 1 000 000
57 1 480 000
58 1 930 000
59 2 360 000
60 2 950 000
61 3 960 000
62 5 160 000
63 6 515 000
64 8 250 000
65 13 380 000
66 23 390 000
67 29 690 000
68 37 800 000
69 47 450 000
70 58 520 000
71 75 670 000
72 96 830 000
73 115 300 000
74 147 500 000
75 186 400 000
76 227 300 000
77 264 600 000
78 337 400 000
79 420 400 000
80 516 000 000[/code][...]

This would mean that exponents above 516M should be factored to 81 bits at least.
[/quote]You are actually off by one bit. The right column should be labeled "Starting at exponent"...
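
Read with that 'starting at exponent' interpretation, the table boils down to a lookup like the following sketch (the function name is invented for illustration; the constants are the FAC## values from the commonc.h excerpt earlier in the thread):
[code]/* "Starting at exponent" reading of the breakeven table: return the TF
   depth for exponent p.  Function name is invented for illustration;
   constants are the FAC## values quoted from commonc.h above. */
#include <stdio.h>

static long default_tf_bits(long p)
{
    static const struct { long bits; long start; } fac[] = {
        { 80, 516000000L }, { 79, 420400000L }, { 78, 337400000L },
        { 77, 264600000L }, { 76, 227300000L }, { 75, 186400000L },
        { 74, 147500000L }, { 73, 115300000L }, { 72,  96830000L },
        { 71,  75670000L }, { 70,  58520000L }, { 69,  47450000L },
        { 68,  37800000L }, { 67,  29690000L }, { 66,  23390000L },
        { 65,  13380000L }, { 64,   8250000L }, { 63,   6515000L },
        { 62,   5160000L }, { 61,   3960000L }, { 60,   2950000L },
        { 59,   2360000L }, { 58,   1930000L }, { 57,   1480000L },
        { 56,   1000000L },
    };
    for (unsigned i = 0; i < sizeof fac / sizeof fac[0]; i++)
        if (p >= fac[i].start)
            return fac[i].bits;
    return 55;  /* below the smallest tabulated breakeven */
}

int main(void)
{
    /* 516000000 maps to 80 bits, not 81 (the off-by-one being corrected). */
    printf("p = 516000000 -> %ld bits\n", default_tf_bits(516000000L));
    return 0;
}[/code]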

S485122 2008-11-03 17:19

Indeed. I goofed up there.

Jacob

