#1
"Mihai Preda"
Apr 2015
3×457 Posts
For a Mersenne candidate with an exponent in the 332M range that has been trial-factored (TF) unsuccessfully to 80 bits, what is the probability of it having a factor in the 80-to-81 bit range?
This page https://www.mersenne.org/various/math.php says: Quote:
Should I say that the best estimate for my question is p = 1/80? Does this match empirical data? For example, if I can do a PRP test in 32 days and TF from 80 to 81 bits in 15 hours, it seems to me that the TF from 80 to 81 bits is not worth it if the probability is 1/80 (but it would be worth it with a factor probability of 2%). Is that so?
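The tradeoff in the question can be sketched numerically. This is a minimal sketch using the timings quoted above and the 1/k factor-probability estimate under discussion; the function name is mine, not from any GIMPS tool:

```python
# Rough expected-value comparison: is TF from 80 to 81 bits worth it,
# given a PRP test takes 32 days and the TF run takes 15 hours?
# Assumes the probability of a factor in [2^k, 2^(k+1)) is about 1/k.

def tf_worth_it(prp_days, tf_hours, factor_prob):
    """TF pays off if the expected PRP time saved exceeds the TF cost."""
    expected_savings_hours = factor_prob * prp_days * 24
    return expected_savings_hours > tf_hours

print(tf_worth_it(32, 15, 1 / 80))  # 9.6 h expected savings < 15 h spent -> False
print(tf_worth_it(32, 15, 0.02))    # 15.36 h expected savings > 15 h spent -> True
```

So under the 1/80 estimate the TF run costs more than it is expected to save, while at 2% it would (barely) pay off, matching the intuition in the question.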
#2
"Forget I exist"
Jul 2009
Dumbassville
26×131 Posts
Quote:
#3
"Mihai Preda"
Apr 2015
3×457 Posts
Thanks for pointing that line out.

Let's see: we're talking about M(p) = 2^p - 1, and 'e' is Euler's constant. TF(k) means "has no factors less than 2^k". The page says that if M(p) is TF(k), then the probability that M(p) is prime is (k - 1)/(e*p).

Let x be the probability of not finding a factor in TF from k to k+1 bits. Then (k - 1)/(e*p) == x * k/(e*p), so x == (k - 1)/k. So the probability of *finding* a factor in TF from k to k+1 bits is 1 - x == 1/k. That is, what we started from.
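The algebra above can be checked numerically; a minimal sketch, plugging in k = 80 and an exponent in the 332M range from the question:

```python
import math

# Sanity check of the derivation above: with P(prime | TF(k)) = (k-1)/(e*p),
# the conditional probability of surviving TF from k to k+1 bits is (k-1)/k,
# so the chance of *finding* a factor in that bit range comes out to 1/k.
k, p = 80, 332_000_000  # example values from the question

prob_prime_tf_k  = (k - 1) / (math.e * p)   # prime probability after TF to k bits
prob_prime_tf_k1 = k / (math.e * p)         # prime probability after TF to k+1 bits
x = prob_prime_tf_k / prob_prime_tf_k1      # P(no factor in [2^k, 2^(k+1)))

print(round(x, 6))      # 0.9875  == (k-1)/k
print(round(1 - x, 6))  # 0.0125  == 1/k == 1/80
```

Note that e and p cancel out of the ratio, which is why the answer depends only on k.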
#4
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
31×173 Posts
P-1 factoring involves the same effort/reward tradeoff.

Factoring effort is adjusted according to an assumption about how many primality tests are saved by finding a factor, the probability of finding a factor as a function of how much factoring effort is expended, and the effort expended per primality test, so as to maximize effort saved. Prime95 has stated that PRP will be double checked, as LL is. (Both the reliability of the computation and of the user are checked.) LL has an error rate of ~2% (without the Jacobi check), so on average there will be ~2.04 LL tests per unfactored exponent; perhaps 2.02 with the Jacobi check. (Actually a little more, because the 2% third check also has an error rate that may lead to a fourth check, etc., an infinite series: 2.040816... or 2.0202...; but that 2% is so imprecisely known that the series sums carry unjustified precision.) PRP with GEC will take 2 tests plus a nonzero but much smaller delta, probably small enough to ignore.

There is sometimes discussion of whether the double checks (and triple, etc.) should be considered, and whether equally or not. One side claims the double checks happen years later on average, on faster hardware, and so are not equally relevant; the other side claims all computing effort is essentially equally valuable or costly at any given point, so all factoring effort and primality checks ought to be counted and considered in the tradeoff, at the current GHzd/day rates of current hardware for TF, P-1, primality first test, and primality DC. The latter makes more sense to me.

Any attempt at optimization is complicated, since the ratio of TF to primality-testing throughput differs somewhat among gpus and among cpus, and drastically between gpus as a whole and cpus as a whole; there may not be good statistical data about the population mix of devices being applied in GIMPS, and those values vary for a single device according to exponent and other variables. (In my little sample of hardware, for example, the TF/LL GHzd/day ratios range from 11.3 to 15.5 on gpus, 22.5 on an igp, and 0.72 to 1.25 on cpus. P-1/LL also varies.) Efficiency gains in primality testing, such as those Preda makes periodically in OpenCL in gpuOwL, Prime95 on cpus, etc., also enter into the tradeoff computations, shifting the optimal point, as would efficiency gains in factoring.

Last fiddled with by kriesel on 2018-08-11 at 15:36
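The 2.040816... and 2.0202... figures above come from an infinite geometric series. A sketch under that model (r is the per-test error rate, ~2% without the Jacobi check or ~1% with it; the helper name is mine):

```python
# Expected number of primality tests per unfactored exponent, modeled as an
# infinite geometric series: two matching results are needed, and each test
# errs with rate r, triggering further checks.  Closed form: 2 / (1 - r).
def expected_tests(error_rate, terms=50):
    # Sum the series explicitly: 2 + 2*r + 2*r^2 + ...
    return sum(2 * error_rate**i for i in range(terms))

print(round(expected_tests(0.02), 6))  # 2.040816 -- the ~2.04 figure above
print(round(expected_tests(0.01), 6))  # 2.020202 -- the ~2.02 figure above
```

As the post notes, the 2% rate itself is imprecise, so quoting many digits of the series sum overstates the real precision.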