2020-07-10, 17:56  #353  
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
17·263 Posts 
Quote:
In my experience doing LOTS of aggressive P-1 and TF in the last few years I have seen the following: Where the majority of the exponents have had GOOD B1/B2 P-1 done, the TF success rate drops from the advertised 1/(Bits-1) to about 1/100. Similarly, where excess TF has been done, the P-1 success rate will drop. I don't have numbers, but that can be determined here. With a given B1/B2, factors will be found in a fairly large range of bit levels, though clearly with fewer in the highest bit levels. However, I don't believe there is a formula to calculate or even estimate it. Someone can correct me.

I suggest one way to estimate TF impact from P-1: you can look here. The process is somewhat tedious but basically:
- Inspect all the factors found in your range of interest that were found via P-1
- Determine the bit level of each factor from the menu here (bottom right)
- Count how many factors P-1 found in your desired bit level
- Expect TF to find roughly that many fewer than expected

Warning: statistics can be deceptive with smaller sample sizes.
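The counting procedure above can be sketched in Python. The factor list and its shape are assumptions for illustration, not mersenne.ca's actual export format:

```python
from collections import Counter

def bit_level(factor: int) -> int:
    # A factor f is at bit level b when 2^(b-1) <= f < 2^b,
    # which is exactly what Python's int.bit_length() returns.
    return factor.bit_length()

def count_p1_factors(p1_factors, target_level):
    """Count P-1-found factors per bit level and report the target level."""
    counts = Counter(bit_level(f) for f in p1_factors)
    return counts, counts[target_level]

# Hypothetical factors found by P-1 in some exponent range:
factors = [2**70 + 33, 2**70 + 101, 2**71 + 5, 2**72 + 9]
counts, in_71 = count_p1_factors(factors, 71)
# counts -> {71: 2, 72: 1, 73: 1}; in_71 -> 2
```

Per the post, if P-1 already found 2 factors at the 71-bit level in your range, expect TF of that bit level to find roughly 2 fewer factors than the usual estimate predicts.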

2020-07-10, 18:16  #354 
Jul 2003
wear a mask
1,483 Posts 
See, I hadn't thought of that potential use at all. What I was imagining is a factor search where the searcher asks "in this range of exponents that has been highly trial-factored by GPUs, where is the sub-range with the lowest B1/B2 pairs that I can attack with additional P-1?"
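The search described above could be sketched as follows, assuming per-exponent P-1 bounds are available as (exponent, B1, B2) tuples; the data format and window width are assumptions, not how mersenne.ca actually serves the data:

```python
def lowest_b1_window(records, window=1_000_000):
    """Bucket exponents into fixed-width windows and return the window
    start whose mean B1 is smallest, i.e. the most attackable sub-range."""
    buckets = {}  # window start -> list of B1 values seen in that window
    for exponent, b1, _b2 in records:
        start = exponent - exponent % window
        buckets.setdefault(start, []).append(b1)
    return min(buckets, key=lambda s: sum(buckets[s]) / len(buckets[s]))

# Hypothetical bounds data for a few exponents:
data = [
    (46_100_000, 500_000, 10_000_000),
    (46_200_000, 600_000, 12_000_000),
    (47_100_000, 300_000, 6_000_000),
    (47_300_000, 350_000, 7_000_000),
]
worst = lowest_b1_window(data)  # -> 47_000_000 (mean B1 = 325,000)
```

A real version would also weigh B2 and the TF depth already done, but ranking windows by mean B1 captures the basic "where to attack" question.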
Last fiddled with by masser on 2020-07-10 at 18:17 
2020-07-10, 18:52  #355 
"James Heinrich"
May 2004
ex-Northern Ontario
31·103 Posts 
It would be more work than just adding another column, since the tables that drive this section aren't ideally suited to storing such numbers. I suppose it could be made to work if people thought it would be very useful, but it would take some effort.
Last fiddled with by James Heinrich on 2020-07-10 at 18:54 
2020-07-10, 18:57  #356  
"James Heinrich"
May 2004
ex-Northern Ontario
3193_{10} Posts 
Quote:


2020-07-10, 20:03  #357 
6809 > 6502
"""""""""""""""""""
Aug 2003
101×103 Posts
I looked at the Worst P-1 Factoring Effort list. In the 46,000,000 and 47,000,000 ranges it shows some exponents as only having a single LL test, but that is below the current DC milestone.

2020-07-10, 20:14  #358  
Jul 2003
wear a mask
1,483 Posts 
Quote:
I also thought that graphs similar to the bit-level depth graphs, but for B1/B2 values, would be a useful visualization, telling us where the ultra-deep, second-pass, and first-pass P-1 wavefronts might be. I understand it's a lot of work. Thanks for the consideration and feedback. 

2020-07-14, 18:27  #359 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
47·101 Posts 
Try checking 90M for worst P-1. Of all the results returned, one indicated 1 LL, the others are blank. Maybe they were PRP and the form doesn't handle that case yet.
And oddly, despite the P-1 bounds being inadequate, instead of issuing the requested P-1 assignment for M90001391, manual assignment issued an unwanted DC instead, necessitating an unreserve. Could we get a check box "issue no substitute assignment types" please?
Last fiddled with by kriesel on 2020-07-14 at 18:37 
2020-07-14, 20:17  #360  
"Oliver"
Sep 2017
Porta Westfalica, DE
2·5·37 Posts 
Quote:
Last fiddled with by kruoli on 2020-07-14 at 20:18 Reason: Spelling error. 

2020-07-14, 21:23  #361 
Jul 2003
wear a mask
1,483 Posts 

2020-07-15, 14:42  #362  
"James Heinrich"
May 2004
ex-Northern Ontario
31·103 Posts 
Quote:
Example numbers (these are from six-month-old data so aren't currently exact, but show the general trend): Code:
1% =     2,649 rows
2% =   184,745 rows
3% =   496,116 rows  < current setting
4% = 1,325,638 rows
But, keeping an open mind, what kind of % limit and exponent range were you thinking of? 
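For illustration, the cutoff filtering being discussed might look like this sketch; the row structure and the `prob_percent` column name are hypothetical, not mersenne.ca's actual schema:

```python
def rows_below_cutoff(rows, cutoff_percent):
    """Keep only exponents whose estimated P-1 success probability is
    under the cutoff -- the set the precalculated page would display."""
    return [r for r in rows if r["prob_percent"] < cutoff_percent]

# Hypothetical precalculated rows with assorted success probabilities:
rows = [{"exponent": 10_000_019 + i, "prob_percent": p}
        for i, p in enumerate([0.5, 1.5, 2.5, 3.5, 4.5])]

len(rows_below_cutoff(rows, 3))  # 3 rows make the 3% list
len(rows_below_cutoff(rows, 5))  # all 5 rows make a 5% list
```

This shows why the row counts above grow so quickly with the cutoff: each extra percentage point of probability admits a much larger slice of the exponent population.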

2020-07-15, 15:08  #363 
"Oliver"
Sep 2017
Porta Westfalica, DE
2×5×37 Posts 
So that data is not generated on demand (like an SQL query on the database), but rather comes from a small query on that precalculated data you were mentioning? I was thinking about limiting the number of results shown to reduce impact, but that doesn't really help with the generation of the precalculated list.
One could now propose some new upper percentage like 5% (IMHO at least somewhat reasonable) or 10% (still applicable for smaller exponents, such as those with up to six digits and the smaller seven-digit ones, but only for those folks who really want to push the factoring). But since the general idea of that site is, as you said, working on really poorly P-1'ed exponents, higher percentages would only make sense if we could have them without negative side effects like slower loading times for everyone else who is looking in the 0-3% range.

Another idea would be to see which exponents had the least GHz-days spent on them P-1'ing; of course, that's most likely a vastly different story. Maybe it is "necessary" to have a separate site for the high-bound P-1'ing community. By no means do I want to say that you have to do this, James, of course. If there is a feasible solution to go way higher, maybe we have to work something out for ourselves, especially since the target group has quite a small number of members. In the meantime, increasing it to 5% might be a good compromise if you agree, but that's up to you; I'm just shooting in the dark. 
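The alternative ranking mentioned above, by least P-1 effort spent, could be sketched like this; the per-exponent GHz-days field name is hypothetical:

```python
def least_p1_effort(rows, n=3):
    """Return the n exponents with the least GHz-days spent on P-1,
    a proxy for 'most neglected' that ignores bounds entirely."""
    return sorted(rows, key=lambda r: r["p1_ghzdays"])[:n]

# Hypothetical effort data:
rows = [{"exponent": 90_000_000 + i, "p1_ghzdays": g}
        for i, g in enumerate([5.0, 0.5, 2.0, 9.0])]

top = least_p1_effort(rows, n=2)
# -> exponents 90_000_001 (0.5 GHz-days) and 90_000_002 (2.0 GHz-days)
```

As the post notes, this ranks quite differently from a probability cutoff: an exponent can have little effort spent yet already-adequate bounds, or vice versa.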