#1
Feb 2008
Meath, Ireland
5×37 Posts
I have a question about assignments on GPU72. When I get new ones, I have the option of getting "Trial Factoring on Double Check Candidates".

So, if I understand correctly, this will give me exponents that have already been LLed once, but not DCed yet? And I'd be factoring at a higher bit level than what was done previously? If that's the case, why weren't those exponents factored to that level before they were LLed the first time? Aren't exponents only TFed up to the bit level where it still pays off, i.e. where the probability of finding a factor multiplied by the LL time saved exceeds the TF time? If so, did anything change since these were first TFed that makes it now worthwhile to TF to higher bit levels, even though we'll only be saving half the LL time (since it was already LLed once, and we're only saving the DC)?
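For anyone who wants to see the arithmetic behind that break-even question, here is a minimal sketch, assuming the usual ~1/b rule of thumb for the chance of a factor turning up in bit level b; the timings are made-up placeholders, not measured figures:

```python
# Rough break-even check for one more bit level of trial factoring.
# Heuristic: the chance that M_p has a factor between 2^b and 2^(b+1)
# is roughly 1/b (the usual GIMPS rule of thumb).

def tf_worthwhile(bit_level, tf_hours_this_level, test_hours, tests_saved):
    """True if TF'ing from 2^bit_level to 2^(bit_level+1) is expected to pay off.

    tf_hours_this_level -- time to trial factor this one bit level
    test_hours          -- time for one LL/PRP test of this exponent
    tests_saved         -- ~2 for untested exponents (first test + DC),
                           ~1 for double-check candidates
    """
    p_factor = 1.0 / bit_level
    expected_saving = p_factor * tests_saved * test_hours
    return expected_saving > tf_hours_this_level

# Made-up example timings: 2 h for one bit level on a GPU, 100 h per test.
print(tf_worthwhile(68, 2.0, 100.0, tests_saved=1))   # DC candidate -> False
print(tf_worthwhile(68, 2.0, 100.0, tests_saved=2))   # untested exponent -> True
```

With the DC-candidate multiplier of 1 instead of 2, the same TF effort has to clear a bar twice as high, which is exactly the "only saving the DC" point above.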
#2
6809 > 6502
"""""""""""""""""""
Aug 2003
2AC6₁₆ Posts
In very broad terms:

They were TF'ed to the level that was appropriate at the time, using Prime95 and CPUs. We now have GPUs and TF software for them, which raises the bit level that makes sense. Yes, it would make sense not to go quite as far in TF as for exponents that have not yet been tested at all. Most of the DC work is LL, but there is some PRP DC too. Some of the people doing TF in the DC range are doing it to help close the gap between the DC and first-time-check (FTC) ranges, so they are willing to go higher than might otherwise make sense.
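A rough way to quantify "raises the bit level that makes sense": each successive bit level has roughly twice as many candidate factors, so it costs about twice as much to search, and a device that is N times faster at TF pushes the economic break-even up by about log2(N) levels. A small sketch, with the speedup figure as an assumed placeholder rather than a measured ratio:

```python
from math import log2

# Each bit level costs roughly 2x the previous one, so an N-times-faster
# TF device buys roughly log2(N) additional bit levels before the
# cost/benefit balance tips back (ignoring the slowly changing 1/b
# factor probability, so treat this as a ballpark only).
assumed_speedup = 100          # placeholder GPU-vs-CPU ratio, not a measured figure
print(f"~{log2(assumed_speedup):.1f} extra bit levels")   # ~6.6
```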
#3
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
2³·661 Posts
Quote:
https://www.mersenneforum.org/showpo...77&postcount=1

I came up with this idea a few years ago when someone whimsically wondered on this forum if there would ever be fewer than 20,000,000 unfactored candidates here: https://www.mersenne.ca/status/tf/0/0/1/0

You can see we are now at 20,754,134. At the time it was well over 21 million. So I extended the thinking like this:

Quote:
The GPUto72 project is helping out by making available "Trial Factoring on Double Check Candidates": https://www.gpu72.com/reports/workers/dctf/
#4
Feb 2008
Meath, Ireland
10111001₂ Posts
OK, thanks both. So the biggest contributing factor (hehe) to the bit-level increase was the CPU vs GPU TFing speed?
Last fiddled with by ZFR on 2020-11-19 at 16:31
#5
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
2³·661 Posts
#6
Feb 2008
Meath, Ireland
B9₁₆ Posts
Quote:
Gotcha. Thanks.

Last fiddled with by ZFR on 2020-11-19 at 16:36
#7
Aug 2020
2×3×19 Posts
The main reason for doing TF on DC candidates is to find factors, the same reason we do ECM on very small exponents that were already double-checked decades ago. For most exponents in the DC range, TF'ing for a few more bit levels is still the most efficient known method of factoring, compared with ECM or P-1 with higher bounds.

Think of helping DC / closing the gap as a side benefit. If you only want to save the most primality-test time per unit of factoring time spent, do TF (or P-1) on first-time-check candidates.

Last fiddled with by Ensigm on 2020-11-19 at 17:12
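A quick illustration of that last point, reusing the same assumed ~1/b heuristic as in the sketch under post #1 (the bit level is just an example):

```python
# Expected primality tests saved per bit level searched, using the 1/b heuristic.
bit_level = 68
p_factor = 1.0 / bit_level

first_time = p_factor * 2    # a factor eliminates both the first test and its DC
dc_only    = p_factor * 1    # the first test has already been run
print(first_time / dc_only)  # -> 2.0: first-time TF saves about twice as much
```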
#8
"/X\(‘-‘)/X\"
Jan 2013
47×67 Posts
Until recently I was doing a lot of trial factoring on candidates that hadn't yet been double checked, since I like to direct my resources to double checking. In a sense I would be eliminating exponents before running LL on them.

But with the advent of PRP verification, it no longer makes sense to run a second LL on an exponent where there is only a single LL result: the PRP check will catch errors that LL won't, which saves having to run a third test, and the overhead of verifying that the PRP run was done correctly is less than the cost of re-running LL when there is a mismatch.

So I primarily do LL double checks where there are mismatches. The value there is finding out which hardware was bad, so all of that hardware's results can be checked early, and I'll likely match one of the two existing results. Otherwise, doing a fresh PRP run is more efficient.

Because I'm primarily targeting mismatches, it doesn't make sense for me to bulk-TF DC exponents to higher levels. And I was about the only one doing that, outside of Wayne's < 20M project. All of the higher ranges have fewer unfactored exponents than that project's goal.
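As a back-of-the-envelope version of that trade-off (both rates below are assumed placeholders, not published GIMPS statistics), measured in units of one full-length primality test:

```python
# Toy expected-cost comparison for an exponent that already has one LL result.
mismatch_rate = 0.02    # assumed chance a second LL mismatches and forces a third test
cert_overhead = 0.01    # assumed cost of verifying a PRP proof, relative to one test

ll_dc_path = 1 + mismatch_rate * 1   # run a second LL, plus a third test on mismatch
prp_path   = 1 + cert_overhead       # run one PRP test plus verify its proof
print(ll_dc_path, prp_path)          # PRP wins whenever cert_overhead < mismatch_rate
```

Under those assumptions the PRP-with-proof route costs slightly less in expectation and, as noted above, also catches errors an LL double check would miss.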