mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Software (https://www.mersenneforum.org/forumdisplay.php?f=10)
-   -   How many iterations does it take to TF an exponent (https://www.mersenneforum.org/showthread.php?t=11777)

James Heinrich 2009-04-24 11:29

How many iterations does it take to TF an exponent
 
How many iterations does it take to TF an exponent to a given bitdepth?

What I'm really looking for is how long it takes to TF an exponent to a given bitdepth, based on benchmark timings.
For example, if I want to TF M50835979 from 2^66 to 2^67, and benchmark says 5.083ms... ? :ermm:
I know I'll end up with 0.293994 GHz-days credit for it, but I don't have any idea how to figure out how long it will take.

Running some [i]very[/i] rough tests, I've come up with these numbers, for number of iterations at each bit depth:[code]
2^58 = 2850
2^59 = 5800
2^60 = 11250
2^61 = 23300
[/code]Presumably this pattern of approximate doubling continues. Can someone explain to me where these numbers come from (I suspect this is the number of candidate prime divisors in that range?) and what the actual numbers are? I assume it's not quite a linear doubling (that would mean there are no candidate divisors below 2^47!).
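A quick sanity check of the near-doubling pattern (a sketch using the rough counts above):

```python
# Check the "approximate doubling" pattern in the rough iteration
# counts measured above (for an M50xxxxxx exponent).
counts = {58: 2850, 59: 5800, 60: 11250, 61: 23300}

ratios = [counts[n + 1] / counts[n] for n in (58, 59, 60)]
print(ratios)  # each ratio is close to, but not exactly, 2
```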
(Please use small words in your explanations, I don't really know what I'm doing :razz:)

Mini-Geek 2009-04-24 12:24

[quote=James Heinrich;170806](I suspect this is the number of candidate prime divisors in that range?)[/quote]
I'd suspect that as well, but I don't really know either.
[quote=James Heinrich;170806]Can someone explain to me where these numbers come from ... and what the actual numbers are?[/quote]
Look at the Trial Factoring section on this page:
[URL]http://www.mersenne.org/various/math.php[/URL]
"One very nice property of Mersenne numbers is that any factor q of 2^P-1 must be of the form 2kp+1. Furthermore, q must be 1 or 7 mod 8." This also explains why the same TF bit levels take a much shorter time on larger candidates. The only factors to search for are 2kp+1 (p here is the same p from 2^p-1 and k is just a number that is incremented to check if it divides), so as the p gets larger, the distance between each possible factor increases and the possible factors per bit level goes down. Could you try the same iteration counting on a larger candidate so we can see how much it goes down?
[quote=James Heinrich;170806]I assume it's not quite a linear doubling (that would mean there are no candidate divisors below 2^47!).[/quote]
Yeah it's probably not quite linear, but it's not quite as far off as you might expect - the smallest candidate for your M50835979 that is 2kp+1, prime, and 1 or 7 mod 8 is k=21, i.e. 2*21*50835979+1=2135111119 which is 31 bits, so below 2^30 there aren't even any possible divisors to check.
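The k=21 claim above can be checked with a small sketch (plain trial division for the primality test, which is fine for q around 2^31):

```python
# Find the smallest k for which q = 2*k*p + 1 is an admissible factor
# candidate of M_p: q must be prime and q = 1 or 7 (mod 8).

def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def smallest_candidate(p):
    """Return (k, q) for the smallest admissible factor candidate of M_p."""
    k = 1
    while True:
        q = 2 * k * p + 1
        if q % 8 in (1, 7) and is_prime(q):
            return k, q
        k += 1

k, q = smallest_candidate(50835979)
print(k, q, q.bit_length())  # k=21, q=2135111119, 31 bits
```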
[quote=James Heinrich;170806](Please use small words in your explanations, I don't really know what I'm doing :razz:)[/quote]
Neither do I. :razz:

James Heinrich 2009-04-24 12:47

1 Attachment(s)
[QUOTE=Mini-Geek;170812]"...This also explains why the same TF bit levels take a much shorter time on larger candidates..."[/QUOTE]I realize I'm missing some key things about how the number of candidates at a bitdepth varies with the size of the exponent, but unfortunately my grasp of mathematics is like a pair of Teflon pliers :cry:

Based on my above linear extrapolation from timings on M50xxxxxx, you can see I'm underestimating the number of candidates in small exponents and overestimating in larger exponents. But I'm lost as to how to get the correct numbers :sad:

Mini-Geek 2009-04-24 13:01

[quote=James Heinrich;170815]I realize I'm missing some key things about how the number of candidates at a bitdepth varies with the size of the exponent, but unfortunately my grasp of mathematics is like a pair of Teflon pliers :cry:

Based on my above linear extrapolation from timings on M50xxxxxx, you can see I'm underestimating the number of candidates in small exponents and overestimating in larger exponents. But I'm lost as to how to get the correct numbers :sad:[/quote]
Most of the k's in 2kp+1 are eliminated because 2kp+1 is composite, so I'd expect the rate to be related to the number of primes at the different bit depths. There are about x/ln(x) primes below x (ln is the natural logarithm, i.e. base e logarithm). Now exactly how is that related? I have no idea. :smile:
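A sketch of what the x/ln(x) approximation says about consecutive bit ranges: the prime count per range slightly less than doubles per bit, which fits the near-doubling of iteration counts.

```python
# Use the pi(x) = x/ln(x) approximation to estimate how the number of
# primes per bit range [2^n, 2^(n+1)) grows from one bit level to the next.
from math import log

def primes_in_bit_range(n):
    """Approximate count of primes between 2^n and 2^(n+1)."""
    lo, hi = 2.0 ** n, 2.0 ** (n + 1)
    return hi / log(hi) - lo / log(lo)

for n in (59, 60, 61):
    ratio = primes_in_bit_range(n) / primes_in_bit_range(n - 1)
    print(n, round(ratio, 4))  # slightly under 2: near-doubling per bit
```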

ATH 2009-04-24 13:47

Setting PercentPrecision=6 in prime.txt and "Iterations between screen outputs" to 100 and 1000:

[code]Iterations between screen outputs 100:
2^54-2^55
Trial factoring M50835979 to 2^55 is 27.496692% complete. Time: 424.841 ms.
Trial factoring M50835979 to 2^55 is 54.715976% complete. Time: 424.511 ms.
2^55-2^56
Trial factoring M50835979 to 2^56 is 13.748347% complete. Time: 419.209 ms.
Trial factoring M50835979 to 2^56 is 27.357988% complete. Time: 441.562 ms.
2^56-2^57
Trial factoring M50835979 to 2^57 is 6.943525% complete. Time: 420.606 ms.
Trial factoring M50835979 to 2^57 is 13.817699% complete. Time: 419.578 ms.

Iterations between screen outputs 1000:
2^57-2^58
Trial factoring M50835979 to 2^58 is 34.578923% complete. Time: 4386.436 ms.
Trial factoring M50835979 to 2^58 is 69.096763% complete. Time: 4331.114 ms.
2^58-2^59
Trial factoring M50835979 to 2^59 is 17.337342% complete. Time: 4242.626 ms.
Trial factoring M50835979 to 2^59 is 34.648276% complete. Time: 4218.649 ms.
2^59-2^60
Trial factoring M50835979 to 2^60 is 33.469282% complete. Time: 4240.288 ms.
Trial factoring M50835979 to 2^60 is 42.137953% complete. Time: 4215.653 ms.
2^60-2^61
Trial factoring M50835979 to 2^61 is 4.338870% complete. Time: 4232.515 ms.
Trial factoring M50835979 to 2^61 is 8.673005% complete. Time: 4220.763 ms.
2^61-2^62
Trial factoring M50835979 to 2^62 is 2.169435% complete. Time: 4345.450 ms.
Trial factoring M50835979 to 2^62 is 4.336703% complete. Time: 4231.159 ms.[/code]

Which gives:
2^54-2^55: 367 iterations
2^55-2^56: 735 iterations
2^56-2^57: 1455 iterations
2^57-2^58: 2897 iterations
2^58-2^59: 5777 iterations
2^59-2^60: 11536 iterations
2^60-2^61: 23073 iterations
2^61-2^62: 46141 iterations

But these numbers are dependent on the exponent p, since candidate factors have the form 2*k*p+1, so how many fall in a given bit range depends on p.
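For reference, those iteration counts can be recovered from the percent-complete deltas in the output above (a sketch; using the delta between two consecutive lines avoids any startup offset):

```python
# Recover total iterations per bit level from the progress lines:
# with N iterations between screen outputs covering delta percent of
# the range, total iterations = N * 100 / delta.

def total_iterations(n_between, pct_a, pct_b):
    """Estimate total iterations from two consecutive progress lines."""
    return round(n_between * 100.0 / (pct_b - pct_a))

print(total_iterations(100, 27.496692, 54.715976))   # 2^54-2^55 -> 367
print(total_iterations(1000, 17.337342, 34.648276))  # 2^58-2^59 -> 5777
```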

Kevin 2009-04-24 20:38

Conceptually, it seems like a good idea to remove all the composite 2*k*p+1. But I think in practice they just remove the k's that obviously give something composite, by working mod 210 (or some similar number with lots of small prime factors). Beyond that, the time spent checking potential factors for primality would exceed the time saved by not having to test composite factors.
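A sketch of the cheap modular elimination described here, assuming only that for each small prime r not dividing 2p, exactly one residue class of k mod r makes 2kp+1 divisible by r:

```python
# For each small prime r with gcd(2p, r) = 1, exactly one residue
# class of k (mod r) makes q = 2kp+1 divisible by r, so the sieve can
# drop those k without any primality testing at all.

def surviving_k_fraction(p, small_primes):
    """Fraction of k surviving divisibility sieving by small_primes."""
    mod = 1
    for r in small_primes:
        mod *= r
    survivors = sum(
        1 for k in range(mod)
        if all((2 * k * p + 1) % r != 0 for r in small_primes)
    )
    return survivors / mod

# Expect (1 - 1/3)(1 - 1/5)(1 - 1/7) = 16/35 of the k's to survive.
print(surviving_k_fraction(50835979, [3, 5, 7]))
```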

TheJudger 2009-04-24 21:23

Some estimates of the number of potential factors of Mp between 2^n and 2^(n+1):

Number of integers between 2^n and 2^(n+1): 2^n

Factors must be of the form 2kp+1, so we can divide 2^n by 2p, leaving 2^n/(2p) candidates.

Another division by 2 comes from the rule that q (q=2kp+1) must be 1 or 7 mod 8. We can ignore q = 3 or 5 mod 8, which accounts for that factor of 2.
So we now have 2^n / (4p) candidates.

If q itself is composite we do not need to check whether q divides Mp. Now it becomes a bit imprecise... the Mersenne website says:
"The sieve then eliminates any potential factors that are divisible by prime numbers below 40,000 or so. Also, bits representing potential factors of 3 or 5 mod 8 are cleared. This process eliminates roughly 95% of potential factors. The remaining potential factors are tested using the powering algorithm above."
We already did the q = 3 or 5 mod 8 step, so here "only" about 90% of the remaining candidates are removed.
So we now have 2^n * 0.1 / (4p) candidates.

Checking ATH's numbers:
p=50835979, n=54: 8859079 candidates to check
p=50835979, n=55: 17718158 candidates to check
...

These numbers are NOT exact, because sieving out composite q does not remove the same fraction of candidates for every p. But it should be a good estimate.
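The estimate above as a small sketch, using the two values quoted for ATH's exponent:

```python
# TheJudger's estimate: roughly 2^n * 0.1 / (4p) surviving factor
# candidates for M_p between 2^n and 2^(n+1).

def estimated_candidates(p, n):
    return 2 ** n * 0.1 / (4 * p)

p = 50835979
print(round(estimated_candidates(p, 54)))  # 8859079
print(round(estimated_candidates(p, 55)))  # 17718158
```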

James Heinrich 2009-04-26 11:26

I ran some test TF on 13 exponents ranging from 200K to 900M, at 58-62 bits, and the data was pretty consistent. So I came up with this approximate formula. It's not 100% accurate, but close enough for my purposes:[code]// Assuming M100,000,000 has 495 candidates at 2^58
// and the number of candidates doubles with each TF bit increase
// and the number of candidates is inversely proportional to the exponent
return (pow(2, $bits - 58) * 495) * (100000000 / $exponent);[/code]
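The same approximation as a Python sketch, for checking (the function name is hypothetical, not Prime95 code):

```python
# Port of the approximation above: assumes M100,000,000 has 495
# candidates at 2^58, doubling per TF bit, and inversely proportional
# to the exponent.

def tf_candidates(bits, exponent):
    return (2 ** (bits - 58) * 495) * (100000000 / exponent)

print(tf_candidates(58, 100000000))        # 495.0 (the calibration point)
print(round(tf_candidates(66, 50835979)))  # the thread's M50835979 example
```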

TheJudger 2009-04-26 15:13

Hi James,

your formula seems to be a good approximation if you call it "candidate groups" instead of "candidates".

From my understanding of the code (commonb.c, factor32.asm, factor64.asm), Prime95 does the timing per candidate group and NOT per candidate.
For the x86 version there is a 4 KiB sieve, which gives ~32k factor candidates BEFORE the sieving process. (The x86-64 version uses a 12 KiB sieve, which holds ~97k factor candidates; the factor of 3 is corrected in the benchmark.)
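The sieve sizes translate to candidate counts like this (a sketch, assuming one sieve bit per factor candidate):

```python
# One sieve bit per factor candidate: a 4 KiB sieve holds 32768
# candidates, a 12 KiB sieve holds 98304 (the "~97k" above),
# hence the factor of 3 between the two builds.
x86_bits = 4 * 1024 * 8
x86_64_bits = 12 * 1024 * 8
print(x86_bits, x86_64_bits, x86_64_bits // x86_bits)
```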

James Heinrich 2009-04-27 00:48

[QUOTE=TheJudger;171037]your formula seems to be a good approximation if you call it "candidate groups" instead of "candidates".
... the factor 3 is corrected in the benchmark[/QUOTE]I don't think that's quite what you said, but my numbers seem to be off by about a factor of 3 -- are you saying that my calculation is (approx) correct if I multiply it by 3? Because that brings my calculations right where I expect them to be, I just don't understand the "why"... can you try explaining it again to me, please?

TheJudger 2009-04-27 16:22

Hi James,

the correction by a factor of 3 is for the built-in factoring benchmarks, because the x86_64 sieve is 3 times bigger than the x86 sieve.

According to my formula, your formula needs a correction factor of ~145571. But this seems a bit too high to me, since a whole sieve ("candidate group") holds only 32k/97k candidates...

If the sieve runs on "mod-classes" (I'm sure it does), then it could be true that the correction factor is that big...

Did you use the x86 or the x86_64 version for your measurements?



Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.