mersenneforum.org > Great Internet Mersenne Prime Search > Math

2008-11-01, 18:22   #12
ATH (Einyen), Dec 2003, Denmark

Quote:
Originally Posted by retina
Don't forget to check out all the other potential problems also. link. My favourite one is "The year 170,141,183,460,469,231,731,687,303,715,884,105,727 problem".
ROFL. OMG, my computer is not ready for the year 170,141,183,460,469,231,731,687,303,715,884,105,727 problem. I'm not home that year; I already made plans. I have to find someone to come by and restart prime95 :)

2008-11-01, 18:34   #13
R. Gerbicz ("Robert Gerbicz"), Oct 2005, Hungary

The TF bit level for Mn is about k=log(n)*3/log(2).

2008-11-01, 20:05   #14
ckdo, Dec 2007, Cleves, Germany

Quote:
Originally Posted by R. Gerbicz
The TF bit level for Mn is about k=log(n)*3/log(2).
Gives k=76.499714254 for n=47450000 and k=77.407279301 for n=58520000.

Only off by 7 bits from the tabulated values...
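
For anyone who wants to reproduce those two numbers, the check is a couple of lines of plain C (nothing below comes from the prime95 source):

Code:
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* R. Gerbicz's estimate: k = 3 * log(n) / log(2) */
    printf("%.9f\n", 3.0 * log(47450000.0) / log(2.0));  /* 76.499714... */
    printf("%.9f\n", 3.0 * log(58520000.0) / log(2.0));  /* 77.407279... */
    return 0;
}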

2008-11-01, 22:40   #15
ATH (Einyen), Dec 2003, Denmark

If you look at the data from 66 bits to 80 bits, it can be fitted with:

bitdepth = 22.94 * exponent^0.0623

or

bitdepth = 10.4428 * log(exponent) - 11.15
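
Both fits reproduce the tabulated depths to within a fraction of a bit, provided log in the second formula is read as log10 (my assumption; it is the only base that matches the data). A quick check in C at the tabulated 70-bit breakeven:

Code:
#include <math.h>
#include <stdio.h>

int main(void)
{
    double p = 58520000.0;  /* FAC70, the tabulated 70-bit breakeven */

    /* ATH's power fit: bitdepth = 22.94 * exponent^0.0623 */
    printf("%.2f\n", 22.94 * pow(p, 0.0623));      /* ~69.90 */

    /* ATH's log fit, assuming log means log10 */
    printf("%.2f\n", 10.4428 * log10(p) - 11.15);  /* ~69.96 */
    return 0;
}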

2008-11-02, 13:15   #16
S00113, Dec 2003

I wonder what hardware those bit levels were calculated on. I assume the optimal setting must be different on AMD64 (64-bit), because it is so much faster at trial factoring than at LL testing. One factor found by trial factoring saves two LL tests. For some exponents, AMD64 owners should probably factor one bit deeper to potentially save the LL test. P-1 complicates matters as well. A given factor of n bits has probability x of being found by P-1 to B1=y, B2=z. I guess some hardware is better at stage 1. If your RAM is fast compared to your CPU, you are probably better off with a lower B1 and a higher B2. Perhaps the limits should be different for each computer, based on CPU and RAM speed.

2008-11-02, 15:43   #17
S485122 ("Jacob"), Sep 2006, Brussels, Belgium

Quote:
Originally Posted by S00113
Perhaps the limits should be different for each computer, based on CPU and RAM speed.
The problem is that the person doing the TF might be different from the one doing the P-1, and a different person again might do the LL test; the double check, in turn, must be done by someone other than the person who did the LL test.

What could be done, though, is to assign TF up to 63 bits to AMD64 and Intel 64-bit CPUs, because at those depths that CPU/software combination really makes a difference. But again, is it worth the complexity?

Concerning P-1, I am of the opinion that all exponents should be done to their ideal level, not a level determined by the available memory. When you receive double-checks, some have been done to limits half those of others, and some exponents did not even have a stage 2 done... The problem from PrimeNet's point of view is that the software would not be as invisible (trying to grab too much memory, or attempting a stage 2 with only 8 MB of memory assigned, well...).

Jacob

2008-11-03, 02:04   #18
cheesehead ("Richard B. Woods"), Aug 2002, Wisconsin USA

Quote:
Originally Posted by S00113
I wonder what hardware those bit levels were calculated on.
The levels aren't very sensitive to hardware type. Modest differences in TF efficiency across hardware don't make much difference when the steps are powers of 2.

But since you asked, here's the relevant part of v25.2 source module commonc.h:

(Answer: 2.0 GHz P4 Northwood)

Code:
/* Factoring limits based on complex formulas given the speed of the */
/* factoring code vs. the speed of the Lucas-Lehmer code */
/* As an example, examine factoring to 2^68 (finding all 68-bit factors). */
/* First benchmark a machine to get LL iteration times and trial factoring */
/* times for a (16KB sieve of p=35000011). */
/* We want to find when time spent eliminating an exponent with */
/* trial factoring equals time saved running 2 LL tests. */
 
/* runs to find a factor (68) *
 #16KB sections (2^68-2^67)/p/(120/16)/(16*1024*8) *
 factoring_benchmark = 2.0 * LL test time (p * ll_benchmark)
 
 simplifying:
 
 68 * (2^68-2^67)/p/(120/16)/(16*1024*8) * facbench = 2 * p * llbench
 68 * 2^67 / p / (120/16) / 2^17 * facbench = 2 * p * lltime
 68 * 2^49 / p / (120/16) * facbench = p * lltime
 68 * 2^49 / (120/16) * facbench = p^2 * lltime
 68 * 2^53 / 120 * facbench = p^2 * lltime
 68 * 2^53 / 120 * facbench / lltime = p^2
 sqrt (68 * 2^53 / 120 * facbench / lltime) = p
*/
 
/* Now let's assume 30% of these factors would have been found by P-1.  So
   we only save a relatively quick P-1 test instead of 2 LL tests.  Thus:
 sqrt (68 / 0.7 * 2^53 / 120 * facbench / lltime) = p
*/
 
/* Now factor in that 35000000 does 19 squarings, but 70000000 requires 20.
   Thus, if maxp is the maximum exponent that can be handled by an FFT size:
 sqrt (68 / 0.7 * 2^53 / 120 *
       facbench * (1 + LOG2 (maxp/35000000) / 19) / lltime) = p
*/
 
/* Now factor in that errors sometimes force us to run more than 2 LL tests.
   Assume 2.04 on average:
 sqrt (68 / 0.7 * 2^53 / 120 *
       facbench * (1 + LOG2 (maxp/35000000) / 19) / lltime / 1.02) = p
*/
 
/* These breakeven points were calculated on a 2.0 GHz P4 Northwood: */
 
#define FAC80 516000000L
#define FAC79 420400000L
#define FAC78 337400000L
#define FAC77 264600000L
#define FAC76 227300000L
#define FAC75 186400000L
#define FAC74 147500000L
#define FAC73 115300000L
#define FAC72 96830000L
#define FAC71 75670000L
#define FAC70 58520000L
#define FAC69 47450000L
#define FAC68 37800000L
#define FAC67 29690000L
#define FAC66 23390000L
 
/* These breakevens were calculated a long time ago on unknown hardware: */
 
#define FAC65 13380000L
#define FAC64 8250000L
#define FAC63 6515000L
#define FAC62 5160000L
#define FAC61 3960000L
#define FAC60 2950000L
#define FAC59 2360000L
#define FAC58 1930000L
#define FAC57 1480000L
#define FAC56 1000000L
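
For reference, the final formula in those comments drops straight into a few lines of C. The facbench/lltime ratio used below is an invented placeholder, not a P4 Northwood measurement; plug in your own benchmark numbers:

Code:
#include <math.h>
#include <stdio.h>

/* Breakeven exponent for factoring to 2^bits, per the derivation above:
   sqrt (bits / 0.7 * 2^53 / 120 *
         facbench * (1 + LOG2 (maxp/35000000) / 19) / lltime / 1.02) */
static double breakeven(int bits, double maxp, double facbench_over_lltime)
{
    return sqrt(bits / 0.7 * pow(2.0, 53) / 120.0 *
                facbench_over_lltime *
                (1.0 + log2(maxp / 35000000.0) / 19.0) / 1.02);
}

int main(void)
{
    /* 0.2 is a made-up ratio; it happens to land near FAC68's 37.8M */
    printf("%.0f\n", breakeven(68, 40000000.0, 0.2));  /* ~38000000 */
    return 0;
}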
Quote:
I assume the optimal setting must be different on AMD64 (64-bit), because it is so much faster at trial factoring than at LL testing.
Do you mean that AMD64 is relatively faster than Intel at TF, but not so different at LL?

Quote:
One factor found by trial factoring saves two LL tests.
... or one P-1 test.

Quote:
For some exponents, AMD64 owners should probably factor one bit deeper to potentially save the LL test.
Since each successive bit level doubles TF time, there has to be quite a differential to justify it. Will you run the numbers for us?

Quote:
P-1 complicates matters as well. A given factor of n bits has probability x of being found by P-1 to B1=y, B2=z.
Note the 30% rule-of-thumb in the code comments.

Quote:
I guess some hardware is better at stage 1. If your RAM is fast compared to your CPU, you are probably better off with a lower B1 and a higher B2. Perhaps the limits should be different for each computer, based on CPU and RAM speed.
Prime95 already computes, based on CPU type and amount of RAM, the optimum B1 and B2. That's why you can see that some adjacent exponents that have already been P-1'd have had notably different B1/B2. Example: 40000217 has been P-1'd to 650000,650000, while 40000231 has been P-1'd to 410000,3382500.


2008-11-03, 07:30   #19
ckdo, Dec 2007, Cleves, Germany

Quote:
Originally Posted by S485122
The values I have (from the sources of v24 and v25) are:
Code:
Bits    up to Exponent
56      1 000 000
57      1 480 000
58      1 930 000
59      2 360 000
60      2 950 000
61      3 960 000
62      5 160 000
63      6 515 000
64      8 250 000
65     13 380 000
66     23 390 000
67     29 690 000
68     37 800 000
69     47 450 000
70     58 520 000
71     75 670 000
72     96 830 000
73    115 300 000
74    147 500 000
75    186 400 000
76    227 300 000
77    264 600 000
78    337 400 000
79    420 400 000
80    516 000 000
[...]

This would mean that exponents above 516M should be factored to 81 bits at least.
You are actually off by one bit. The right column should be labeled "Starting at exponent"...
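
In other words, prime95 takes an exponent to the first bit level whose FACnn threshold it exceeds. Here is a minimal sketch of that lookup under the "starting at" reading (my own illustration, not code from the prime95 source):

Code:
#include <stdio.h>

/* The FAC56..FAC80 breakevens from commonc.h, quoted earlier. */
static const long fac[] = {
    1000000L, 1480000L, 1930000L, 2360000L, 2950000L,       /* 56-60 */
    3960000L, 5160000L, 6515000L, 8250000L, 13380000L,      /* 61-65 */
    23390000L, 29690000L, 37800000L, 47450000L, 58520000L,  /* 66-70 */
    75670000L, 96830000L, 115300000L, 147500000L,           /* 71-74 */
    186400000L, 227300000L, 264600000L, 337400000L,         /* 75-78 */
    420400000L, 516000000L                                  /* 79-80 */
};

/* Each FACnn marks where bit level nn starts: scan from the top and
   return the first level whose threshold the exponent exceeds. */
static int tf_bits(long p)
{
    int i;
    for (i = 24; i >= 0; i--)   /* fac[24] is FAC80 */
        if (p > fac[i])
            return 56 + i;
    return 55;                  /* below the FAC56 threshold */
}

int main(void)
{
    printf("%d\n", tf_bits(516000000L));  /* 79 */
    printf("%d\n", tf_bits(516000001L));  /* 80, not 81 */
    return 0;
}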

2008-11-03, 17:19   #20
S485122 ("Jacob"), Sep 2006, Brussels, Belgium

Indeed. I goofed up there.

Jacob