20190811, 18:27  #1 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
D56_{16} Posts 
Duplication of TF, P1, etc. (please don't)
Please apply computing resources efficiently, avoiding needless duplication.
Unless you know the hardware used for the initial TF was defective, please do not rerun exponent/bit-level combinations that have already been run. There are many examples of different users duplicating the same TF levels on the same exponent.

This is how it should look for more recent exponents, once for each bit-level range: https://www.mersenne.ca/exponent/85575233

If the lower bit levels have dropped off the list, that does not mean they were not performed. They were, and were subsequently dropped from the database years later. An example of wasteful duplication is https://www.mersenne.org/report_expo...exp_hi=&full=1 An example of an exponent that had full TF but later had some results removed from the database is 38000009.

It also seems wasteful to do additional TF and P1 after an exponent has matching composite primality test results, as in http://www.mersenne.ca/exponent/48000467 https://www.mersenne.org/report_expo...exp_hi=&full=1 In some examples the <64-bit levels were not skipped; they are simply 8 or more years old and have been removed from the database for size reasons.

This exponent had duplicated TF, duplicated P1 to the same bounds, and three matching LL tests, so wasted effort in all three computation types: https://www.mersenne.org/report_expo...exp_hi=&full=1

Two matching LL or two matching PRP results are intended. Three or more matching results are generally considered a waste, except for the initial confirmation of a newly discovered Mersenne prime.

This exponent received an extraordinary amount of factoring and LL-testing effort after matching LL tests were completed in May 2007: duplication of past TF, TF to much higher bit levels, P1 factoring, and four unneeded LL tests, including two by the same person. https://www.mersenne.org/report_expo...exp_hi=&full=1

Unnecessary, unproductive duplication of effort slows the progress of GIMPS toward finding the next Mersenne prime. Last fiddled with by kriesel on 20190811 at 18:28 
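For readers unfamiliar with what a TF "bit level" means, here is a minimal, illustrative sketch (nothing like the optimized GPU sieves that mfaktc/mfakto actually use; the function name is mine). Any factor of M(p) = 2^p - 1 with p prime has the form q = 2kp + 1 with q ≡ ±1 (mod 8), and a bit-level run tests all candidates q between consecutive powers of two:

```python
def tf_bit_level(p: int, bits: int):
    """Toy trial factoring of M(p) = 2^p - 1 at one bit level.

    Tests every candidate factor q = 2*k*p + 1 with
    2^(bits-1) < q < 2^bits, returning the first factor found or None.
    """
    # first k giving a candidate above 2^(bits-1)
    k = (2 ** (bits - 1)) // (2 * p) + 1
    q = 2 * k * p + 1
    while q < 2 ** bits:
        # factors of Mersenne numbers are congruent to 1 or 7 mod 8
        if q % 8 in (1, 7) and pow(2, p, q) == 1:
            return q
        k += 1
        q = 2 * k * p + 1
    return None

# M(11) = 2047 = 23 * 89; the factor 23 sits in the 5-bit range.
print(tf_bit_level(11, 5))   # -> 23
# M(31) = 2147483647 is prime, so every bit level comes up empty.
print(tf_bit_level(31, 10))  # -> None
```

This also shows why redoing a bit level someone already completed is pure waste: the candidate set for a given (exponent, bit range) pair is identical on every run.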
20190811, 23:23  #2 
Bemusing Prompter
"Danny"
Dec 2002
California
2·3^{2}·5^{3} Posts 
There's an unofficial subproject to reduce the number of exponents below 20 million without known factors. This is the main reason some of us are doing TF and P1 on known composite numbers.
Also, a triple check often isn't intentional. If an assignment expires and the exponent is reassigned to someone else, but the original assignment is completed anyway, this can result in more than two LL results. Last fiddled with by ixfd64 on 20190811 at 23:25 
20190812, 01:22  #3  
If I May
"Chris Halsall"
Sep 2002
Barbados
2^{2}×3^{2}×5×7^{2} Posts 
Quote:
Some like finding factors. And for the example you provided, the first P1 run only did Stage 1, while the second run did an appropriately deep P1 with both Stages 1 and 2. Again, not all of us here are after finding the next MP, although *many* put in a ***considerable*** amount of effort helping others do so. 
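The distinction above between a Stage-1-only P1 run and a full Stage 1 + Stage 2 run can be made concrete with a toy sketch (a minimal illustration under my own assumptions, not how prime95 or CUDAPm1 implement it; the function names are mine). Stage 1 finds a factor q of M(p) when q - 1 is B1-smooth; Stage 2, omitted here, additionally catches one larger prime up to B2:

```python
# Toy P-1 stage 1 on M(p) = 2^p - 1, for illustration only.
from math import gcd

def small_primes(limit):
    """Sieve of Eratosthenes up to `limit`."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def p_minus_1_stage1(p, B1):
    """Return a nontrivial factor of M(p) found with bound B1, or None."""
    N = (1 << p) - 1
    # Any factor q satisfies q = 2*k*p + 1, so q - 1 is divisible by p;
    # include p in the accumulated exponent, then every prime power <= B1.
    x = pow(3, p, N)
    for prime in small_primes(B1):
        pe = prime
        while pe * prime <= B1:  # highest power of `prime` not above B1
            pe *= prime
        x = pow(x, pe, N)
    g = gcd(x - 1, N)
    return g if 1 < g < N else None

# M(11) = 2047 = 23 * 89.  For q = 23, q - 1 = 2 * 11 is 3-smooth,
# so even B1 = 3 finds it while missing 89 (q - 1 = 2^3 * 11).
print(p_minus_1_stage1(11, 3))  # -> 23
```

The practical point matches the post: a run to larger B1/B2 bounds genuinely covers new ground, while repeating the same bounds retraces exactly the same smooth exponents.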

20190812, 02:11  #4  
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
17·251 Posts 
Quote:
While none of this factoring will find a prime, much of the work done here has eliminated DC work. 

20190812, 02:32  #5 
Jul 2003
3^{3}×47 Posts 
Some of the excessive duplication could be the result of people testing their hardware and/or software. The database is chock full of results that can be used to verify a user's setup.

20190812, 02:46  #6  
Bemusing Prompter
"Danny"
Dec 2002
California
2250_{10} Posts 
Quote:
Last fiddled with by ixfd64 on 20190812 at 02:46 

20190812, 04:36  #7 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2×3×569 Posts 
mersenneforum.org > Great Internet Mersenne Prime Search
(not Great Internet Mersenne Factor Search) Factors are a means to an end. The end is prime discovery. That's the primary goal. Repeating the small-FFT self-tests of years ago is not as good a hardware test as the built-in tests at the actual FFT sizes that will be used for the exponents to be run in the future. Last fiddled with by kriesel on 20190812 at 05:07 
20190812, 05:11  #8 
6809 > 6502
"""""""""""""""""""
Aug 2003
101×103 Posts
2^{3}×3×17×19 Posts 
My kit, my choice.
If I want to burn my electrons doing "useless" work, I can. I can also join a different project. Yes, at times people may not understand that they are wasting FLOPs, but sometimes they do understand and choose to do it anyway. Your 'hail Mary' factoring runs on samuel's numbers are like that. Any time someone claims to have found the next prime, people fire up redundant TF and P1 efforts, for the same reason you did your work. And people frequently start over from the lowest bit levels. 
20190812, 05:49  #9  
Einyen
Dec 2003
Denmark
2×13×107 Posts 
Quote:
I might test more in the future but have no plans right now. As Uncwilly said, it is everyone's right to "waste" their own resources if they want, and it is probably not wasteful in their eyes. Last fiddled with by ATH on 20190812 at 05:51 

20190812, 06:58  #10  
Sep 2003
A12_{16} Posts 
Quote:


20190812, 07:30  #11  
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2·3·569 Posts 
Quote:
It's not about making anyone wrong. It's about having the awareness to choose well.

Nobody is making exceptional claims about 1000003. The twentieth or hundredth duplicated test on it does not advance the project. Repeated no-factor P1 runs on the same exponent to the same bounds by the same user are not good tests even of different hardware and software. There's a list of known exponents, bounds, and factors for that. And there's no utility or benefit in reporting duplicate factoring runs to the PrimeNet server and cluttering it with duplicates.

Exponents like those of samuel, cochet, and other claimants would get tested eventually anyway. They just got addressed earlier than they otherwise would. The runs I made on samuel's exponents also gave me experience with unusually high tests_saved values in prime95 P1.

I'm careful to make many of my runs ahead of the pack do double or triple duty or more. For example, P1 runs that determine the limits of CUDAPm1 on a specific gpu model also probe the software's capability, establish run-time scaling, generate bug reports, document limits and workarounds, and prepare the way for other scouting runs (PRP or LL software) ahead of the pack. Runs are staggered and well spaced, the opposite of duplicated effort. Similarly, I test gpuOwL and prime95 releases in various ways and provide feedback to the authors, as do others.

We've had a variety of wasted cycles, including some pretty extreme cases, due to lack of awareness among new users. (A cpu-year wasted on primality testing a 100Mdigit exponent that had already been factored is one example.) This thread could help that awareness, and help efficiency, including for some secondary goals. The learning curve is long. There are over 150 separate reference posts in https://www.mersenneforum.org/showthread.php?t=24607 and growing. Last fiddled with by kriesel on 20190812 at 07:30 
