#12
Serpentine Vermin Jar
Jul 2014
2·13·131 Posts
Quote:
http://www.mersenne.org/report_expon...exp_hi=&full=1 Well, okay, and another: http://www.mersenne.org/report_expon...exp_hi=&full=1

#13
"Bob Silverman"
Nov 2003
North of Boston
5·17·89 Posts
Quote:
not that high.......

#14
"Serge"
Mar 2008
San Diego, Calif.
3²×7×163 Posts
I think the mersenne.ca pages explain the process fairly well, and in pictures.
It's not like the user fiddled with the B1, B2 bounds. See the line "PrimeNet" on those links: http://www.mersenne.ca/exponent/54433487 Code:
            B1          B2
PrimeNet    600,000     15,000,000
Actual      635,000     16,510,000
(The actual bounds were higher than PrimeNet's recommendation.) Do you suggest the server send much higher B2s to the clients in the hope that more factors will be found? Systematically? That would steal CPU time from actual LL. Let's see your example #2, which illustrates that: http://www.mersenne.ca/exponent/2001049 Here someone did (deliberately) run a very high B1/B2 P-1 which still didn't have a chance to catch the factor! It was a bunch of wasted time: 29 GHz-days for a futile P-1 when 0.13 GHz-days does the LL. Seriously in need of a tune-up, some users' priorities are. But that's what tickles their fancy, or something - so let them.
There is no probability involved (when you retrospectively look at a known factor). Simply observe the factorization of F-1 (in the case of Mersenne input, never mind p itself) and compare the prime factors to B1 and B2. It is easy to see that the runs in these two examples above could not have found the factor (unless B2 had been even higher). In both cases the last factor is very large.
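To make that check concrete, here is a minimal sketch of the retrospective test (my own illustration, not code from Prime95 or mersenne.ca; the function name p1_would_find is made up). Given a known factor f of 2^p-1 and the B1/B2 that were actually used, it decides whether that run could have found f, assuming the usual behaviour: stage 1 includes every prime power up to B1 plus the exponent p for free, and stage 2 can pick up at most one extra prime up to B2. Code:
from sympy import factorint

def p1_would_find(f, p, B1, B2):
    # f = 2*k*p + 1, so p divides f-1 and is covered by stage 1 automatically
    e = (f - 1) // p
    # prime powers of e that are too big for stage 1
    leftovers = [(q, a) for q, a in factorint(e).items() if q**a > B1]
    if not leftovers:
        return True                      # pure stage-1 hit
    if len(leftovers) == 1:
        q, a = leftovers[0]
        return a == 1 and q <= B2        # a single extra prime, caught by stage 2
    return False

# e.g. p1_would_find(f, 54433487, 635000, 16510000) with f the known factor of M54433487
For the two exponents above, run with the bounds that were actually used, this comes out False, which is just the statement above that those runs could not have found their factors.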

#15
Serpentine Vermin Jar
Jul 2014
2×13×131 Posts

#16
"Serge"
Mar 2008
San Diego, Calif.
10100000011101₂ Posts
Everything does look okay, then. You should have shown some cases where the factor was indeed missed. We thought that's what these two were, but they weren't.
As to the thread title question, "(Let's) Require P-1 and other factoring work to be done on reliable machines!" -- well, it begs the question: who's gonna be left to mind the proverbial store (i.e. run the LL tests)? The crappy machines?! ;-)

#17
Feb 2010
Sweden
AD₁₆ Posts
Quote:
I thought that I did the P-1 after the factor was found, but that is not the case. Sometimes I do such idiotic tasks, I admit, to try to find a second factor. I have an unhealthy interest in finding factors in the 2.00M-2.01M range, as is publicly known. Sometimes I do the evil trick of informing Prime95 about the existence of a factor for the exponent, so even with higher bounds it will not be reported. Actually, a lot of the misses might be a date-stamp problem: cases where the P-1 was done with the previous factor already known. In the dawn of time James corrected for such cases in his database, since I was in the top 3 for missed P-1 factors there. On mersenne.org the misses are for a different reason, though, so there I do not stand out.
Last fiddled with by bloodIce on 2015-05-05 at 06:10 Reason: better wording and facts

#18
"Bob Silverman"
Nov 2003
North of Boston
16615₈ Posts
Quote:
It is clear, at least in the case(s) that I cited, that there was no such miss.

#19
Dec 2002
881 Posts
To further clarify my intentions:
My most recently found factor, for exponent 10444177, had some P-1 done on it earlier by some unknown machine. Given the B1 and B2 values that were used, that machine should have found this factor, unless, and only unless, the machine was flaky. As the factor found later proved, the machine was flaky. There are many such exponents with a reported P-1 factoring effort whose B1 and B2 bounds are later proven to have been sufficient to guarantee finding a factor unless the machine is faulty. It looks to me as if more flaky machines than the 'sannerud | laptop' alone have been set to do P-1 work only. Having mprime and Prime95 run a number of P-1 test cases against a set of exponents with known smooth factors might cost only an hour per machine, and it can save many more hours of work by not having to run LL tests on exponents that actually have a small or K-smooth factor. Failure to find all of those factors would then disqualify the machine for P-1 work.
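For illustration, a minimal sketch of that qualification test (the table, function name and report format are hypothetical, nothing PrimeNet actually implements): hand a machine a short list of exponents whose known factors are guaranteed to turn up at the chosen test bounds, then withhold P-1 assignments from it if any of them is missed. Code:
# Hypothetical self-test table: exponent -> factor that a correct P-1 run at the
# test bounds must find.  Entries would be filled in from the known-factor
# database; the commented placeholder below is illustrative only.
SELF_TEST = {
    # 10444177: <its known factor, findable at the test bounds>,
}

def machine_qualifies(reported_factors):
    """reported_factors: dict mapping exponent -> set of factors the machine
    reported after running its P-1 test assignments."""
    for exponent, expected in SELF_TEST.items():
        if expected not in reported_factors.get(exponent, set()):
            return False   # a guaranteed factor was missed: the machine is flaky
    return True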

#20
"Bob Silverman"
Nov 2003
North of Boston
5×17×89 Posts

#21
Serpentine Vermin Jar
Jul 2014
2×13×131 Posts
Quote:
I narrowed it down as much as I could in a limited time... cases where it was at least *possible* that a prior P-1 run had missed something (without looking at the B1/B2 of that prior run). Then I pulled up a few examples and I never saw anything that said "Hey, wait a minute... that P-1 check should have found that factor that so-and-so found later on doing their own P-1." ... I just wasn't seeing it. But again, I didn't look at each and every case... I narrowed it down to fewer than 100 where it could have happened, and in the 10 or so I eyeballed, nothing strange jumped out.

#22
Serpentine Vermin Jar
Jul 2014
110101001110₂ Posts
If someone wants to provide me with an easily digestible formula (as in, it could be done in SQL) to take a found factor and then see if a prior P-1 "miss" with certain bounds should have found it, no doubts or ifs/ands/buts, then I'll try to query the data for cases like that.
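A sketch of one way to make that SQL-able (my own suggestion, with made-up column and function names; the factoring step itself has to happen outside SQL): precompute, for each known factor f of 2^p-1, the largest and second-largest prime powers of (f-1)/p and whether the largest is a plain prime. A prior P-1 attempt with bounds B1, B2 should have found f exactly when the second-largest prime power is <= B1 and the largest is either <= B1 or a plain prime <= B2; once the columns are stored, that is just a couple of comparisons. Code:
from sympy import factorint

def miss_columns(f, p):
    """Columns to precompute and store for a known factor f of 2^p - 1."""
    fac = factorint((f - 1) // p)                      # {prime: exponent}; p itself is free
    powers = sorted(q**a for q, a in fac.items())
    largest_pp = powers[-1] if powers else 1
    second_pp = powers[-2] if len(powers) > 1 else 1
    big_prime, big_exp = max(fac.items(), key=lambda t: t[0]**t[1]) if fac else (1, 1)
    return second_pp, largest_pp, big_exp              # big_exp == 1: stage 2 can catch it

def should_have_found(second_pp, largest_pp, big_exp, B1, B2):
    # the part that reduces to a WHERE clause over stored columns
    return second_pp <= B1 and (largest_pp <= B1 or (big_exp == 1 and largest_pp <= B2))
Any (known factor, prior P-1 attempt) pair where should_have_found() is true but the attempt reported no factor would be a genuine candidate for a flaky machine.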