#1
"GIMFS"
Sep 2002
Oeiras, Portugal
2·7·113 Posts
As per the Recent Cleared report, user TJAOI has found yet another sizeable batch of factors too small to have remained undiscovered (between 59 and 60 bits, when most of those exponents had been TFed well into the mid-to-high 60s, and some into the low 70s).
Any chance of tracing the erroneous results to a particular user, as was done in the past?
#2
P90 years forever!
Aug 2002
Yeehaw, FL
17·487 Posts
No.
My recommendation is that when GPUs start testing these larger exponents, we have the GPU retest for these small factors -- it would probably take just a few seconds per exponent.
#3
Nov 2008
509 Posts
Quote:
Which is very odd when, according to the database, ALL numbers are at least 61 bits.... Clearly, then, the database is wrong, all that factoring work was never done, or we have lost tens of thousands of results...
#4
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
12315₈ Posts
Quote:
Because if TF had found factors much lower than 59 bits, it would not even have looked further. And pardon what might again be a lack of info on my part, but as I understand it, GPU factoring software is not (easily?) capable of TF under 64 bits? At least my mfaktc can't.

Last fiddled with by petrw1 on 2015-04-17 at 21:37
#5
P90 years forever!
Aug 2002
Yeehaw, FL
17×487 Posts
#6
Nov 2008
775₈ Posts
So shouldn't we then start a sub-project to refactor EVERYTHING to a minimum of 61 bits, or preferably 64, where GPUs really take over?
Yes, I am aware that it won't find any more primes, but it will tidy things up and move all of those that should be there to the factored column... you know, for completeness.
#7
P90 years forever!
Aug 2002
Yeehaw, FL
2057₁₆ Posts
Quote:
Let's do some back-of-the-envelope calculations: we are looking at ~40 missed factors out of about 30M exponents in the 100M to 1B range. Assuming a 1-in-60 success rate for TF, that's 500,000 factors to be found. Assume TJAOI is only reporting results for 5% of the 59-60 bit search in this batch; that's a miss rate of 800 in 500,000, or about 0.2%. This is not a huge inefficiency, but it is worth correcting with a TF double-check, either by GPU or, ugh, prime95. After all, if we are factoring these big exponents to, say, 2^79, redoing TF to 2^64 costs only 1/32768 of the TF to 2^79.
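A minimal sketch of that arithmetic, taking the post's own assumptions (the 1-in-60 TF yield and the 5% reporting share) at face value:

Code:
# Back-of-the-envelope check of the miss-rate estimate above.
exponents      = 30_000_000   # exponents in the 100M-1B range
tf_yield       = 1 / 60       # assumed TF success rate per exponent
reported_miss  = 40           # missed 59-60 bit factors in this batch
reported_share = 0.05         # assumed fraction of the search reported so far

total_factors = exponents * tf_yield            # ~500,000 expected factors
total_missed  = reported_miss / reported_share  # ~800 missed overall
miss_rate     = total_missed / total_factors    # ~0.0016, i.e. ~0.2%

# TF cost doubles per bit level, so a redo to 2^64 vs. a full run to 2^79:
cost_ratio = 2 ** (79 - 64)                     # 32768

print(f"{total_factors:,.0f} factors, ~{total_missed:.0f} missed "
      f"({miss_rate:.2%}), redo cost 1/{cost_ratio}")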
#8
Nov 2008
509₁₀ Posts
Quote:
100069, from 2 to 60 bits: about 18 minutes.
1055231, from 2 to 61 bits: just over 2 minutes.
It can't check those under 100k. I am running version 0.21
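Rough intuition for why the smaller exponent takes about nine times longer despite the lower bit level: candidate factors of 2^p-1 have the form 2kp+1, so the number of candidates below 2^b scales as roughly 2^b/(2p), and a smaller p means many more k values to test. A minimal sketch, ignoring sieving and per-class overheads (which is why the ratio doesn't match exactly):

Code:
# Approximate count of candidate factors 2kp+1 below 2^bits, ignoring sieving.
def candidates(p: int, bits: int) -> float:
    return 2 ** bits / (2 * p)

for p, bits in [(100069, 60), (1055231, 61)]:
    print(f"M{p} to 2^{bits}: ~{candidates(p, bits):.2e} candidates")

# ~5.8e12 for M100069 vs ~1.1e12 for M1055231: the smaller exponent has
# roughly 5x the raw candidates, in line with the 18 min vs 2 min timings.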
#9
"GIMFS"
Sep 2002
Oeiras, Portugal
2·7·113 Posts
Quote:
mfaktc is capable of TFing to levels lower than 64. It's just that before version 0.21 the sieving was performed on the CPU - less efficient, and it would tie up one CPU core. From 0.21 on, the sieving is done by the GPU even for the lower levels. A lot better...
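For reference, redoing the low ranges from post #8 would need worktodo.txt entries along these lines (a sketch from memory of mfaktc's Factor=exponent,bit_min,bit_max syntax; a PrimeNet assignment ID field may also appear before the exponent, so check against your own worktodo):

Code:
Factor=100069,2,60
Factor=1055231,2,61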
#10
"GIMFS"
Sep 2002
Oeiras, Portugal
3056₈ Posts
#11
Romulan Interpreter
"name field"
Jun 2011
Thailand
41·251 Posts
Both mfaktX programs can factor below 64 bits; it's just that they are not as efficient there as when factoring higher (the speed is about half, see below). This is related to the kernels and to CPU/GPU sieving. I have been doing 28-72 bits (or higher, depending on the exponent) for all exponents I get assigned for LL since a long time ago, when I found my LL report starting to fill with "exponents for which a factor was found later".

OTOH, James did "factoring by k" for below 50+ bits, and Tjaoi went higher, then he switched to "by p". I don't think it is necessary to re-do "all" low factoring for high exponents. I myself did "low factoring" (below 40 bits completely, by k) and a few ranges (a few millions) to 66 and 67 bits in the past and didn't find any missed factor*; see the discussion in the former "missed factors" or "user tjaoi" threads (I don't know the actual name). That activity is a loss of time, as other users (axn?) also pointed out at the time.

For the sake of calculation: taking an exponent from below 50 (40, etc.) bits to 64 bits is done by mfaktc in a single step, using a slow kernel, and it takes the same amount of time X as taking the same exponent from 65 to 66. Taking it from 64 to 65 is half of that time, X/2. Even if you could do all these 3 steps in ten seconds on average (you can't**), there are still ~20M exponents to go through, which would be about six and a half years. Divide this by the number of participants (we are just a handful of people with GPUs giving our breath here). I think it is not worth it. OTOH, in the same time, the same GPU power could LL about 160 exponents***, or 80 if we DC too, so if we find more than 80 missing factors, this may be worth it... Well, it is not for me to decide, but I may join in, if it is decided.

-------
* I found about 12000 factors, but none of them was the "first"; all expos for which I found a factor had a smaller factor previously known
** I am talking about 4 top-tier GPUs, and using the "less classes" setting, which is much faster for this activity, i.e. low bit level and high exponent
*** Calculated with an average of 15 days for an LL test. A Titan can LL an 80M exponent in about 3 days, but it would need 60 days for a 332M expo

Last fiddled with by LaurV on 2015-04-18 at 04:31
Similar Threads
| Thread | Thread Starter | Forum | Replies | Last Post |
| missed factor? | tha | Data | 79 | 2021-11-19 15:55 |
| Factor missed by TF | bcp19 | PrimeNet | 15 | 2015-08-10 11:57 |
| P-1 Missed factor | tha | Data | 7 | 2014-04-30 20:54 |
| Missed factors | TheMawn | Information & Answers | 7 | 2014-01-10 10:23 |
| Missed small factors | dswanson | Data | 63 | 2004-11-24 04:30 |