2019-03-14, 01:36   #2
kriesel
Why don't we double check TF runs or P-1 runs?

The "economics" of factoring runs are determined based on single factoring runs per type. That is, factoring is done and quit in a way to maximize the probable total project effort savings. The chance of finding a factor if the computation goes correctly, times the chance of an error occurring, times the chance of that error having hidden a factor as a result, times (1. - ~20% chance of the other factoring method finding the missed factor) is small enough it is too costly to double run all factoring to reduce the incidence of missed factors. It would use more effort than running the primality tests on those exponents whose related factors were missed. Or it would force reducing factoring effort to one less TF bit level and to lower P-1 bounds, reducing the estimated probability of finding a factor by about 1/77=0.013/exponent in TF and ~0.008/exponent in P-1, which translates again into more primality tests times two per unfactored Mersenne number that might otherwise have been factored.

If on the other hand we had a reliable indicator that an unrecoverable factoring error had occurred, it might be worthwhile to rerun just the attempts with detected errors. (If the error is reproducible, as sometimes occurs in CUDAPm1 for example, there is no point in a second attempt.)
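A minimal sketch of that rerun policy, assuming a hypothetical per-run error record (these field names are invented for illustration and are not from any actual GIMPS tool):

Code:
from dataclasses import dataclass

@dataclass
class FactoringRun:
    exponent: int
    error_detected: bool       # hypothetical flag from a reliable error indicator
    error_reproducible: bool   # fails the same way on retry, as CUDAPm1 sometimes does

def should_rerun(run: FactoringRun) -> bool:
    """Rerun only attempts whose error was detected and is not reproducible."""
    return run.error_detected and not run.error_reproducible

print(should_rerun(FactoringRun(77232917, True, False)))  # True: transient error
print(should_rerun(FactoringRun(77232917, True, True)))   # False: rerun would fail identically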

Also, a sort of double-checking is already in place. User TJAOI is steadily running a different factoring approach, which finds candidate factors first and then determines whether each belongs to an exponent in the mersenne.org range. https://www.mersenneforum.org/showpo...postcount=3043
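For flavor, here is a guess in Python at what the exponent-matching step might look like. Given a candidate factor q, any prime exponent p with q | 2^p - 1 must divide (q - 1)/2, because factors of Mersenne numbers with prime exponent have the form 2kp + 1. This is only an illustration of the idea, not TJAOI's actual method or code:

Code:
from sympy import factorint

def exponents_divided_by(q: int, p_max: int = 999_999_937) -> list[int]:
    """Prime exponents p <= p_max (mersenne.org range: p < 1e9) with q | 2^p - 1.

    Any factor q of 2^p - 1 (p prime) has the form 2*k*p + 1,
    so p must be a prime factor of (q - 1) // 2.
    """
    return [p for p in factorint((q - 1) // 2)   # prime factors = candidate exponents
            if p <= p_max and pow(2, p, q) == 1]

# Example: 223 = 2*3*37 + 1 divides 2^37 - 1.
print(exponents_divided_by(223))   # [37]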


Top of reference tree: https://www.mersenneforum.org/showpo...22&postcount=1
