mersenneforum.org > Data mersenne.ca

2020-07-10, 17:56   #353
petrw1
1976 Toyota Corona years forever!

"Wayne"
Nov 2006

17·263 Posts

Quote:
 Originally Posted by masser I have a feature request for mersenne.ca. Would it be possible to add the average B1/B2 bounds of prior P-1 factorization attempts to the tables that list how far (in bit levels) ranges have been trial-factored? Does anyone else believe this might be useful?
While I agree there is real value in knowing how the current P-1 has impacted TF and vice versa, the comparison is not that straightforward.

In my experience doing LOTS of aggressive P-1 and TF in the last few years I have seen the following:
- Where the majority of the exponents have had GOOD B1/B2 P-1 done, the TF success rate drops from the advertised 1/(Bits-1) to about 1/100.
- Similarly, where excess TF has been done, the P-1 success rate will drop. I don't have numbers, but that can be determined here.

With a given B1/B2, factors will be found across a fairly large range of bit levels, though clearly with fewer at the highest bit levels.
However, I don't believe there is a formula to calculate or even estimate that distribution. Someone can correct me.

One way to estimate the TF impact from P-1 is to look here.
The process is somewhat tedious, but basically:
- Inspect all the factors in your range of interest that were found via P-1.
- Determine the bit level of each factor from the menu here (bottom right).
- Count how many of those factors fall in your desired bit level.
- Expect TF to find roughly that many fewer factors than it otherwise would.

Warning: Statistics can be deceptive with smaller sample sizes.
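A minimal sketch of the counting steps above, in Python. The factor list, range size, and the application of the 1/(Bits-1) heuristic are illustrative assumptions for this post, not anything mersenne.ca actually computes:

```python
# Hypothetical P-1 factors from a range of interest (made-up values,
# chosen to sit just above 2^80, 2^81 and 2^82 respectively).
p1_factors = [
    1208925819614629174706189,   # bit level 81
    2417851639229258349412357,   # bit level 82
    4835703278458516698824713,   # bit level 83
]

def bit_level(f):
    """Bit level b of a factor f, i.e. 2^(b-1) <= f < 2^b."""
    return f.bit_length()

# Count how many factors P-1 already found at the bit level
# we plan to trial-factor next.
target_bits = 81
found_by_p1 = sum(1 for f in p1_factors if bit_level(f) == target_bits)

# Heuristic: TF of one bit level finds a factor for roughly 1/(bits-1)
# of the exponents; subtract what P-1 already took.
exponents_in_range = 1000          # assumed size of the range
baseline = exponents_in_range / (target_bits - 1)
print(f"P-1 found {found_by_p1} factor(s) at {target_bits} bits")
print(f"Expect TF to find roughly {baseline - found_by_p1:.1f} "
      f"instead of {baseline:.1f}")
```

With these made-up numbers, TF at 81 bits would be expected to yield about 11.5 factors rather than the naive 12.5.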

2020-07-10, 18:16   #354
masser

Jul 2003

1,483 Posts

Quote:
 Originally Posted by petrw1 I suggest one way to estimate TF impact from P-1 you can look
See, I hadn't thought of that potential use at all. What I was imagining is a factor search where the searcher asks "in this range of exponents that has been highly trial-factored by GPUs, where is the subrange with the lowest B1/B2 pairs that I can attack with additional P-1?"

Last fiddled with by masser on 2020-07-10 at 18:17

2020-07-10, 18:52   #355
James Heinrich

"James Heinrich"
May 2004
ex-Northern Ontario

31·103 Posts

Quote:
 Originally Posted by masser If it's too much work or others don't deem it important enough, I understand. Just spitballing here.
It would be more work than just adding another column since the tables that drive this section aren't ideally suited to storing such numbers. I suppose it could be made to work if people thought it would be very useful, but it would take some effort.

Last fiddled with by James Heinrich on 2020-07-10 at 18:54

2020-07-10, 18:57   #356
James Heinrich

"James Heinrich"
May 2004
ex-Northern Ontario

31·103 Posts

Quote:
 Originally Posted by masser What I was imagining is a factor search where the searcher asks "in this range of exponents that has been highly trial-factored by GPUs, where is the subrange with the lowest B1/B2 pairs that I can attack with additional P-1."
If you're looking for P-1 work to do that has been done badly before, you're looking for Worst P-1 Factoring Effort.

2020-07-10, 20:03   #357
Uncwilly
6809 > 6502

"""""""""""""""""""
Aug 2003

8,933 Posts

I looked at the Worst P-1 Factoring Effort list. In the 46 and 47,000,000 ranges it shows some exponents as only having a single LL test. But that is below the Current DC milestone.
2020-07-10, 20:14   #358
masser

Jul 2003

1,483 Posts

Quote:
 Originally Posted by James Heinrich If you're looking for P-1 work to do that has been done badly before, you're looking for Worst P-1 Factoring Effort.
I know. Even from the tables, I was only a click or two away from the mersenne.org sortable tables. I just thought having the averages available at the different range depths (100M, 10M, 1M, 100K, 10K) might be informative.
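The averages-per-range-depth idea could be prototyped roughly like this; the records, bucket width, and function name are invented here purely for illustration:

```python
from collections import defaultdict

# Hypothetical (exponent, B1, B2) records from prior P-1 attempts.
records = [
    (46_100_003,   500_000, 10_000_000),
    (46_250_021,   600_000, 12_000_000),
    (47_900_017, 1_000_000, 30_000_000),
]

def average_bounds(records, bucket_size):
    """Average B1/B2 per exponent bucket (e.g. 1M-wide ranges)."""
    buckets = defaultdict(list)
    for p, b1, b2 in records:
        buckets[p // bucket_size * bucket_size].append((b1, b2))
    return {
        start: (sum(b1 for b1, _ in pairs) / len(pairs),
                sum(b2 for _, b2 in pairs) / len(pairs))
        for start, pairs in sorted(buckets.items())
    }

for start, (avg_b1, avg_b2) in average_bounds(records, 1_000_000).items():
    print(f"{start}-{start + 999_999}: avg B1={avg_b1:,.0f} B2={avg_b2:,.0f}")
```

The same grouping with bucket sizes of 10K through 100M would give the different range depths mentioned above.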

I also thought that graphs similar to the bit level depth graphs, but for B1/B2 values, would be a useful visualization, telling us where the ultra-deep, second-pass, and first-pass P-1 wavefronts might be.

I understand it's a lot of work. Thanks for the consideration and feedback.

2020-07-14, 18:27   #359
kriesel

"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

47·101 Posts

Try checking 90M for worst P-1. Of all the results returned, one indicated 1 LL; the others are blank. Maybe they were PRP and the form doesn't handle that case yet.

And oddly, despite the P-1 bounds being inadequate, instead of issuing the requested P-1 assignment for M90001391, manual assignment issued an unwanted DC instead, necessitating an unreserve. Could we get a check box "issue no substitute assignment types", please?

Last fiddled with by kriesel on 2020-07-14 at 18:37
2020-07-14, 20:17   #360
kruoli

"Oliver"
Sep 2017
Porta Westfalica, DE

2·5·37 Posts

Quote:
 Originally Posted by James Heinrich
Would it be feasible to extend the upper maximum of "probability range" on that page, maybe with an override checkbox, for those crazy people like me that sometimes want to do higher P-1 on small exponents?

Last fiddled with by kruoli on 2020-07-14 at 20:18 Reason: Spelling error.

2020-07-14, 21:23   #361
masser

Jul 2003

1,483 Posts

Quote:
 Originally Posted by kruoli Would it be feasible to extend the upper maximum of "probability range" on that page, maybe with an override checkbox, for those crazy people like me that sometimes want to do higher P-1 on small exponents?
Seconded.

2020-07-15, 14:42   #362
James Heinrich

"James Heinrich"
May 2004
ex-Northern Ontario

31·103 Posts

Quote:
 Originally Posted by kruoli Would it be feasible to extend the upper maximum of "probability range" on that page
Naturally it's possible, but it comes at the cost of an exponential increase in candidates (and a bit more time required to generate the list every night).

Example numbers (these are from 6-month old data so aren't currently exact, but show the general trend):
Code:
1% =     2,649 rows
2% =   184,745 rows
3% =   496,116 rows <-- current setting
4% = 1,325,638 rows
This page is supposed to help find badly-P-1'd exponents; I don't think a P-1 probability >3% really qualifies under that definition.

But, keeping an open mind, what kind of % limit and exponent range were you thinking of?
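The growth in the table above works out to roughly a constant factor per percentage point from 2% up. A rough extrapolation to higher limits, purely illustrative and assuming the trend continues (which it need not):

```python
# Row counts from the table above (6-month-old data, per the post).
rows = {1: 2_649, 2: 184_745, 3: 496_116, 4: 1_325_638}

# Per-percentage-point growth factor over the 2% -> 4% steps
# (geometric mean, about 2.7x per point).
growth = (rows[4] / rows[2]) ** 0.5

# Extrapolated candidate counts at higher limits; real counts depend
# on the actual exponent distribution, so treat these as ballpark only.
for pct in (5, 10):
    est = rows[4] * growth ** (pct - 4)
    print(f"{pct}% ~ {est:,.0f} rows (extrapolated)")
```

Even a 5% cutoff would roughly triple the nightly list, which makes the cost of raising the limit concrete.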

2020-07-15, 15:08   #363
kruoli

"Oliver"
Sep 2017
Porta Westfalica, DE

2×5×37 Posts

So that data is not generated on demand (like an SQL query on the database), but rather a small query on the precalculated data you were mentioning? I was thinking about limiting the number of results shown to reduce the impact, but that doesn't really help with the generation of the precalculated list.

One could propose some new upper percentage like 5% (IMHO at least somewhat reasonable) or 10% (still applicable for smaller exponents, such as those with up to six digits and the smaller seven-digit ones, but only for those folks who really want to push the factoring). But since the general idea of that page is, as you said, working on really poorly P-1'ed exponents, higher percentages would only make sense if we could have them without negative side effects like slower loading times for everyone else looking in the 0-3% range.

Another idea would be to see which exponents have had the least GHz-days spent on P-1'ing them; of course, that's most likely a vastly different story. Maybe it is "necessary" to have a separate page for the high-bound P-1'ing community. By no means do I want to say that you have to do this, James, of course. If there is a feasible solution to go way higher, maybe we have to work something out for ourselves, especially since the target group has quite a small number of members. In the meantime, increasing it to 5% might be a good compromise if you agree, but that's up to you; I'm just shooting in the dark.

