mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Math (https://www.mersenneforum.org/forumdisplay.php?f=8)
-   -   Recommended TF bit levels for M(>10^8) (https://www.mersenneforum.org/showthread.php?t=10886)

NBtarheel_33 2008-10-31 10:14

Recommended TF bit levels for M(>10^8)
 
Hi,

Wondering if anyone knows around what bit level we should TF up to for LMH-sized numbers. I think I remember seeing something like 2^72 for numbers around M100,000,000, but I'm thinking about even bigger numbers. For instance, I was looking at numbers around M867530900 ("Jenny numbers"!), and it seems like it might be wise to go almost up to 80 bits there. I recently also worked on M999999937 (the biggest number PrimeNet lists), factoring up to 2^73 - that took the better part of 24 hours, but considering that the LL test estimate came out at almost 70 years, I guess it's worthwhile to go much higher. Also, where should we start P-1'ing numbers this huge?

S485122 2008-10-31 11:23

The values I have (from the sources of v24 and v25) are:[code]Bits up to Exponent
56 1 000 000
57 1 480 000
58 1 930 000
59 2 360 000
60 2 950 000
61 3 960 000
62 5 160 000
63 6 515 000
64 8 250 000
65 13 380 000
66 23 390 000
67 29 690 000
68 37 800 000
69 47 450 000
70 58 520 000
71 75 670 000
72 96 830 000
73 115 300 000
74 147 500 000
75 186 400 000
76 227 300 000
77 264 600 000
78 337 400 000
79 420 400 000
80 516 000 000[/code]In the source code:[code]
#define FAC80 516000000L
#define FAC79 420400000L
#define FAC78 337400000L
#define FAC77 264600000L
#define FAC76 227300000L
#define FAC75 186400000L
#define FAC74 147500000L
#define FAC73 115300000L
#define FAC72 96830000L
#define FAC71 75670000L
#define FAC70 58520000L
#define FAC69 47450000L
#define FAC68 37800000L
#define FAC67 29690000L
#define FAC66 23390000L

/* These breakevens were calculated a long time ago on unknown hardware: */

#define FAC65 13380000L
#define FAC64 8250000L
#define FAC63 6515000L
#define FAC62 5160000L
#define FAC61 3960000L
#define FAC60 2950000L
#define FAC59 2360000L
#define FAC58 1930000L
#define FAC57 1480000L
#define FAC56 1000000L
[/code]This would mean that exponents above 516M should be factored to 81 bits at least.

Jacob

NBtarheel_33 2008-10-31 11:50

That's just what I was looking for
 
Thanks, Jacob. This should almost be made into a sticky, at least in the LMH forum. It makes sense that there's no information about numbers over M5xx,000,000; I believe the current version of Prime95 will not do an LL test over M596,000,000.

You really gain an appreciation for just how expensive LL testing on these kinds of numbers is when you consider that weeks, or even *months*, of trial factoring make good sense. On M999,999,937, for example, it took me a day to get to 73 bits, and each further bit level doubles that: roughly 2+4+8+16+32+64+128+256 = 510 days, or just under 1 1/2 *YEARS* :surprised just on trial factoring to 81 bits (and I bet M999,999,937 ought to go out to 84 or 85 bits, really). Wow. Then again, 1 1/2 years of factoring is just a brief prelude to the 70-80 year LL test my computer estimated for M999,999,937. I think I'll stick with my 30M exponents for now.
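
The doubling argument in loop form, just to make the arithmetic explicit (a throwaway C sketch):

[code]#include <stdio.h>

/* The doubling argument: 1 day to reach 2^73 implies ~2 days for bit 74,
   4 for bit 75, ..., 256 for bit 81 - a geometric series summing to 510. */
int main(void)
{
    int bit, days = 1, total = 0;
    for (bit = 74; bit <= 81; bit++) {
        days *= 2;
        total += days;
        printf("bit %d: %3d days (cumulative: %d)\n", bit, days, total);
    }
    return 0;
}[/code]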

Thanks again for the info!

jinydu 2008-11-01 03:35

[QUOTE=NBtarheel_33;147349]I was looking at numbers around M867530900 ("Jenny numbers"!)[/QUOTE]

Where did that name come from?

Uncwilly 2008-11-01 04:10

[quote=jinydu;147437]Where did that name come from?[/quote]
:google:
Jenny 867-5309

[URL="http://graphjam.com/2008/07/29/song-chart-memes-number-of-calls-per-year-to-867-5309/"]
[/URL]

Batalov 2008-11-01 07:40

This page is virus-infected, did you know that?

jinydu 2008-11-01 09:10

[QUOTE=NBtarheel_33;147356]Then again 1 1/2 years factoring is just a brief prelude to the 70-80 year LL test my computer estimated for M999,999,937. I think I'll stick with my 30M exponents for now.

Thanks again for the info![/QUOTE]

Funny. Prime95 v24.14 at least doesn't seem to give estimates beyond 30 years (it just says estimated completion time after 2038).

NBtarheel_33 2008-11-01 09:32

LL testing time
 
I should have elaborated on how I got 70-80 years. I used the Advanced -> Time option in Prime95 v25.7 to run several 1000-iteration tests of M999,999,937. From the average iteration time, I extrapolated out to 999,999,937 iterations, which works out to somewhere between 70 and 80 years.
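
For the curious, here's the back-of-envelope version as a C sketch; the 2.2 and 2.5 seconds-per-iteration figures are round numbers of mine, not the actual benchmark output:

[code]#include <stdio.h>

/* Extrapolation from per-iteration timing to total LL time.  The s/iter
   values bracket the range that yields a 70-80 year estimate. */
int main(void)
{
    double iters = 999999937.0;         /* one squaring per iteration */
    double year = 365.25 * 86400.0;     /* seconds per year */
    double s;
    for (s = 2.2; s <= 2.51; s += 0.3)
        printf("%.1f s/iter -> %.1f years\n", s, iters * s / year);
    return 0;
}[/code]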

You're right that Prime95 will not display completion times beyond 2038 (actually, beyond whatever second marks the Unix time rollover - 2,147,483,647 seconds after midnight UTC on January 1, 1970 - hint: what's the largest value that fits in a signed 32-bit integer?). If you look in prime.ini (I think), you'll see references to estimated completion times given in Unix time.
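
A one-liner makes the rollover date concrete (C, assuming a POSIX-ish environment where time_t counts seconds since the epoch):

[code]#include <stdio.h>
#include <time.h>

/* The largest signed 32-bit value, read as seconds since the Unix epoch. */
int main(void)
{
    time_t t = 2147483647;              /* 2^31 - 1 */
    printf("32-bit time_t ends at: %s", asctime(gmtime(&t)));
    /* -> Tue Jan 19 03:14:07 2038 (UTC) */
    return 0;
}[/code]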

WRT the link, I actually went there and watched the video, and didn't get virusized. Hope there's not now something going on in my computer that I'm not aware of...

ET_ 2008-11-01 12:38

[QUOTE=jinydu;147446]Funny. Prime95 v24.14 at least doesn't seem to give estimates beyond 30 years (it just says estimated completion time after 2038).[/QUOTE]

There is a computer bug lying in wait for January 2038, just as there was one for Y2K.

Luigi

retina 2008-11-01 12:46

[QUOTE=ET_;147455]There is a computer bug hidden for January 2038, as there was one for Y2K.[/QUOTE]Don't forget to check out all the other potential problems also. [url=http://en.wikipedia.org/wiki/Time_formatting_and_storage_bugs]link[/url]. My favourite one is "The year 170,141,183,460,469,231,731,687,303,715,884,105,727 problem".

Uncwilly 2008-11-01 17:04

[QUOTE=Batalov;147444]This page is virus-infected, did you know that?[/QUOTE]I didn't until later. The problem didn't crop up for me until about 10 pages in on that site. Thanks, Garo. I don't want to be the cause of a problem.

ATH 2008-11-01 18:22

[QUOTE=retina;147457]Don't forget to check out all the other potential problems also. [url=http://en.wikipedia.org/wiki/Time_formatting_and_storage_bugs]link[/url]. My favourite one is "The year 170,141,183,460,469,231,731,687,303,715,884,105,727 problem".[/QUOTE]

ROFL. OMG, my computer is not ready for the year 170,141,183,460,469,231,731,687,303,715,884,105,727 problem. I'm not home that year; I already made plans. I'll have to find someone to come by and restart prime95 :)

R. Gerbicz 2008-11-01 18:34

The TF bit level for Mn is about k=log(n)*3/log(2).

ckdo 2008-11-01 20:05

[quote=R. Gerbicz;147481]The TF bit level for Mn is about k=log(n)*3/log(2).[/quote]

Gives k=76.499714254 for n=47450000 and k=77.407279301 for n=58520000.

Only off by 7 bits from the tabulated values... :tu:

ATH 2008-11-01 22:40

If you look at the data from 66 bits to 80 bits, it can be fitted with:

bitdepth = 22.94*exponent[sup]0.0623[/sup]

or

bitdepth = 10.4428*log10(exponent) - 11.15
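
A quick check of both fits against a few of the tabulated breakevens (a small C sketch):

[code]#include <stdio.h>
#include <math.h>

/* Evaluate both fits at the tabulated 66-, 70- and 80-bit breakevens. */
int main(void)
{
    long exps[] = {23390000L, 58520000L, 516000000L};
    int i;
    for (i = 0; i < 3; i++) {
        double e = (double) exps[i];
        printf("p = %9ld  power fit: %5.2f  log fit: %5.2f\n",
               exps[i], 22.94 * pow(e, 0.0623), 10.4428 * log10(e) - 11.15);
    }
    return 0;
}[/code]

Both land within about 0.2 bits of the table at those three points.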

S00113 2008-11-02 13:15

I wonder what hardware those bit levels were calculated on. I assume the optimal setting must be different on AMD64 (64-bit), because it is so much faster at trial factoring than at LL testing. One factor found by trial factoring saves two LL tests. For some exponents, AMD64 owners should probably factor one bit deeper to potentially save the LL test. P-1 complicates matters as well: a given factor of n bits has probability x of being found by P-1 to B1=y, B2=z. I guess some hardware is better at stage 1. If your RAM is fast compared to your CPU, you are probably better off with a lower B1 and a higher B2. Perhaps the limits should be different for each computer based on CPU and RAM speed.

S485122 2008-11-02 15:43

[QUOTE=S00113;147561]Perhaps the limits should be different for each computer based on CPU and RAM speed.[/QUOTE]The problem is that the person doing the TF might be different from the one doing the P-1, and yet another person might do the LL test; the double check, in turn, should be done by someone other than the person who did the LL test.

What could be done, though, is assign TF up to 63 bits to AMD64 and Intel 64 CPUs, because at those depths those CPU/software combinations really make a difference. But again, is it worth the complexity?

Concerning P-1, I am of the opinion that all exponents should be done to their ideal level, not a level determined by the available memory. When you receive double checks, some are done to limits half those of others, and some exponents did not even have a stage 2 done... The problem from PrimeNet's point of view is that the software would not be as invisible (trying to grab too much memory - and trying a stage 2 with only 8MB of memory assigned, well...)

Jacob

cheesehead 2008-11-03 02:04

[quote=S00113;147561]I wonder what hardware those bit levels were calculated on.[/quote]The levels aren't very sensitive to hardware type. Modest differences in TF efficiency across hardware don't make much difference when the steps are powers of 2.

But since you asked, here's the relevant part of v25.2 source module commonc.h:

(Answer: 2.0 GHz P4 Northwood)

[code]/* Factoring limits based on complex formulas given the speed of the */
/* factoring code vs. the speed of the Lucas-Lehmer code */
/* As an example, examine factoring to 2^68 (finding all 68-bit factors). */
/* First benchmark a machine to get LL iteration times and trial factoring */
/* times for a (16KB sieve of p=35000011). */
/* We want to find when time spent eliminating an exponent with */
/* trial factoring equals time saved running 2 LL tests. */

/* runs to find a factor (68) *
#16KB sections (2^68-2^67)/p/(120/16)/(16*1024*8) *
factoring_benchmark = 2.0 * LL test time (p * ll_benchmark)

simplifying:

68 * (2^68-2^67)/p/(120/16)/(16*1024*8) * facbench = 2 * p * llbench
68 * 2^67 / p / (120/16) / 2^17 * facbench = 2 * p * lltime
68 * 2^49 / p / (120/16) * facbench = p * lltime
68 * 2^49 / (120/16) * facbench = p^2 * lltime
68 * 2^53 / 120 * facbench = p^2 * lltime
68 * 2^53 / 120 * facbench / lltime = p^2
sqrt (68 * 2^53 / 120 * facbench / lltime) = p
*/

/* Now lets assume 30% of these factors would have been found by P-1. So
we only save a relatively quick P-1 test instead of 2 LL tests. Thus:
sqrt (68 / 0.7 * 2^53 / 120 * facbench / lltime) = p
*/

/* Now factor in that 35000000 does 19 squarings, but 70000000 requires 20.
Thus, if maxp is the maximum exponent that can be handled by an FFT size:
sqrt (68 / 0.7 * 2^53 / 120 *
facbench * (1 + LOG2 (maxp/35000000) / 19) / lltime) = p
*/

/* Now factor in that errors sometimes force us to run more than 2 LL tests.
Assume, 2.04 on average:
sqrt (68 / 0.7 * 2^53 / 120 *
facbench * (1 + LOG2 (maxp/35000000) / 19) / lltime / 1.02) = p
*/

/* These breakeven points were calculated on a 2.0 GHz P4 Northwood: */

#define FAC80 516000000L
#define FAC79 420400000L
#define FAC78 337400000L
#define FAC77 264600000L
#define FAC76 227300000L
#define FAC75 186400000L
#define FAC74 147500000L
#define FAC73 115300000L
#define FAC72 96830000L
#define FAC71 75670000L
#define FAC70 58520000L
#define FAC69 47450000L
#define FAC68 37800000L
#define FAC67 29690000L
#define FAC66 23390000L

/* These breakevens were calculated a long time ago on unknown hardware: */

#define FAC65 13380000L
#define FAC64 8250000L
#define FAC63 6515000L
#define FAC62 5160000L
#define FAC61 3960000L
#define FAC60 2950000L
#define FAC59 2360000L
#define FAC58 1930000L
#define FAC57 1480000L
#define FAC56 1000000L
[/code]
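
As a sanity check on that algebra, you can plug FAC68 back into the final formula and recover the implied ratio of the two benchmarks. A minimal C sketch; the ~1.01 FFT-size term is my approximation for exponents near 37.8M, not a value from the source:

[code]#include <stdio.h>
#include <math.h>

/* Solve the final formula for facbench/lltime at p = FAC68, i.e. the
   time per 16KB factoring chunk over the time per LL iteration. */
int main(void)
{
    double p = 37800000.0;              /* FAC68 */
    double fftterm = 1.01;              /* 1 + LOG2 (maxp/35000000) / 19 */
    double ratio = p * p * 0.7 * 120.0 * 1.02 /
                   (68.0 * pow(2.0, 53.0) * fftterm);
    printf("implied facbench/lltime at FAC68: %.3f\n", ratio);
    /* -> about 0.2: one sieve chunk costs ~1/5 of an LL iteration */
    return 0;
}[/code]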

[quote]I assume the optimal setting must be different on AMD64 (64 bit), because it is so much faster at trial factoring than LL-testing.[/quote]Do you mean that AMD64 is relatively faster than Intel at TF, but not so different at LL?

[quote]One factor found by trial factoring saves two LL tests.[/quote]... or one P-1 test.

[quote]For some exponents AMD64 owners shold probably factor one bit deeper to potentially save the LL test.[/quote]Since each successive bit level doubles TF time, there has to be quite a differential to justify it. Will you run the numbers for us?
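
To frame it before anyone benchmarks: in the formula above, the breakeven exponent scales as sqrt(facbench/lltime), so at exponent p the extra bit pays off once TF (relative to LL) is (FAC[bits+1]/p)^2 times faster. A rough C sketch with one example boundary; the 1.52x figure is just that example, not a measured number:

[code]#include <stdio.h>

/* Required TF speedup (relative to LL) to justify one extra bit level
   for an exponent sitting right at the 69-bit breakeven. */
int main(void)
{
    double p = 47450000.0;              /* FAC69 */
    double fac70 = 58520000.0;          /* FAC70 */
    double r = (fac70 / p) * (fac70 / p);
    printf("TF speedup needed for 70 bits at p = %.0f: %.2fx\n", p, r);
    /* -> about 1.52x */
    return 0;
}[/code]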

[quote]P-1 complicates matters as well. A given factor of n bits have x probability of beeing found by P-1 to B1=y, B2=z.[/quote]Note the 30% rule-of-thumb in the code comments.

[quote]I guess some hardware is better at stage 1. If your RAM is fast comared to your CPU, you are probably better off by a lower B1 and higher B2. Perhaps the limits should be different for each computer based on CPU and RAM speed.[/quote]Prime95 already computes, based on CPU type and amount of RAM, the optimum B1 and B2. That's why you can see that some adjacent exponents that have already been P-1'd have had notably different B1/B2. Example: 40000217 has been P-1'd to 650000,650000, while 40000231 has been P-1'd to 410000,3382500.

ckdo 2008-11-03 07:30

[quote=S485122;147354]The values I have (from the sources of v24 and v25) are :[code]Bits up to Exponent
56 1 000 000
57 1 480 000
58 1 930 000
59 2 360 000
60 2 950 000
61 3 960 000
62 5 160 000
63 6 515 000
64 8 250 000
65 13 380 000
66 23 390 000
67 29 690 000
68 37 800 000
69 47 450 000
70 58 520 000
71 75 670 000
72 96 830 000
73 115 300 000
74 147 500 000
75 186 400 000
76 227 300 000
77 264 600 000
78 337 400 000
79 420 400 000
80 516 000 000[/code][...]

This would mean that exponents above 516M should be factored to 81 bits at least.
[/quote]You are actually off by one bit. The right column should be labeled "Starting at exponent"...
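
In code terms, the defines mark where each depth *starts*; a rough sketch of the selection logic (the function name is mine, not from the source):

[code]/* fac[i] = FAC(56+i): the exponent at which factoring to 56+i bits begins. */
static const long fac[25] = {
      1000000L,   1480000L,   1930000L,   2360000L,   2950000L,
      3960000L,   5160000L,   6515000L,   8250000L,  13380000L,
     23390000L,  29690000L,  37800000L,  47450000L,  58520000L,
     75670000L,  96830000L, 115300000L, 147500000L, 186400000L,
    227300000L, 264600000L, 337400000L, 420400000L, 516000000L
};

int tf_depth(long p)    /* p = Mersenne exponent, p > 1000000 */
{
    int i, bits = 56;
    for (i = 0; i < 25; i++)
        if (p > fac[i])
            bits = 56 + i;
    return bits;        /* e.g. tf_depth(999999937L) == 80, not 81 */
}[/code]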

S485122 2008-11-03 17:19

Indeed. I goofed up there.

Jacob

