Old 2020-07-10, 17:56   #353
petrw1
1976 Toyota Corona years forever!
 
 
"Wayne"
Nov 2006
Saskatchewan, Canada

10357₈ Posts

Quote:
Originally Posted by masser
I have a feature request for mersenne.ca. Would it be possible to add average B1/B2 bounds for prior P-1 factoring attempts to the tables that list how far (in bit levels) ranges have been trial-factored? Does anyone else believe this might be useful?
While I agree there is real value in knowing how the current P-1 has impacted TF and vice versa, the comparison is not that straightforward.

In my experience doing LOTS of aggressive P-1 and TF in the last few years, I have seen the following:
- Where the majority of the exponents have had GOOD B1/B2 P-1 done, the TF success rate drops from the advertised 1/(bits-1) to about 1/100.
- Similarly, where excess TF has been done, the P-1 success rate will drop. I don't have numbers, but that can be determined here.

With a given B1/B2, factors will be found across a fairly large range of bit levels, though clearly with fewer at the highest bit levels. However, I don't believe there is a formula to calculate or even estimate that distribution; someone can correct me.
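To put rough numbers on the drop described above, a minimal sketch (the candidate count and bit level below are assumptions for illustration; the 1/(bits-1) and ~1/100 rates are the figures quoted above):

Code:
# Back-of-envelope TF yield, before vs. after good P-1 (Python sketch).
# The candidate count and bit level are assumptions for illustration.
candidates = 1000   # unfactored exponents in some range (assumed)
bits = 74           # target TF bit level (assumed)

naive_yield = candidates / (bits - 1)   # advertised 1/(bits-1) rate
post_pm1_yield = candidates / 100       # ~1/100 observed after good P-1

print(f"expected TF factors without prior P-1: {naive_yield:.1f}")
print(f"expected TF factors after good P-1:    {post_pm1_yield:.1f}")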

One way I suggest to estimate the TF impact from P-1 is to look here.
The process is somewhat tedious, but basically (a sketch of the counting step follows the list):
- Inspect all the factors in your range of interest that were found via P-1.
- Determine the bit level of each factor, from the menu here, bottom right.
- Count how many factors P-1 found at your desired bit level.
- Expect TF to find roughly that many fewer factors than otherwise expected.
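As a minimal sketch of that counting step (Python; assumes you have exported the P-1-found factors for your range from mersenne.ca, and the factor values below are placeholders, not real data):

Code:
# Count P-1-found factors per bit level (sketch).
from collections import Counter

# Placeholder values standing in for factors exported from mersenne.ca.
pm1_factors = [
    1208310389368577648081,
    36483767397775883145751,
    604462909807314587353111,
]

# The bit level of a factor f is just the number of bits in f.
levels = Counter(f.bit_length() for f in pm1_factors)

for bits in sorted(levels):
    # Expect TF at this bit level to find roughly levels[bits] fewer
    # factors than the 1/(bits-1) estimate would otherwise suggest.
    print(f"{bits}-bit factors already found by P-1: {levels[bits]}")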

Warning: Statistics can be deceptive with smaller sample sizes.
Old 2020-07-10, 18:16   #354
masser
 
 
Jul 2003
wear a mask

2×5×139 Posts

Quote:
Originally Posted by petrw1
One way I suggest to estimate the TF impact from P-1 is to look here
See, I hadn't thought of that potential use at all. What I was imagining is a factor search where the searcher asks "in this range of exponents that has been highly trial-factored by GPUs, where is the subrange with the lowest B1/B2 pairs that I can attack with additional P-1?"

Last fiddled with by masser on 2020-07-10 at 18:17
Old 2020-07-10, 18:52   #355
James Heinrich
 
 
"James Heinrich"
May 2004
ex-Northern Ontario

7·419 Posts

Quote:
Originally Posted by masser
If it's too much work or others don't deem it important enough, I understand. Just spitballing here.
It would be more work than just adding another column, since the tables that drive this section aren't ideally suited to storing such numbers. I suppose it could be made to work if people thought it would be very useful, but it would take some effort.

Last fiddled with by James Heinrich on 2020-07-10 at 18:54
Old 2020-07-10, 18:57   #356
James Heinrich
 
 
"James Heinrich"
May 2004
ex-Northern Ontario

2933₁₀ Posts

Quote:
Originally Posted by masser
What I was imagining is a factor search where the searcher asks "in this range of exponents that has been highly trial-factored by GPUs, where is the subrange with the lowest B1/B2 pairs that I can attack with additional P-1?"
If you're looking for P-1 work to do that has been done badly before, you're looking for Worst P-1 Factoring Effort.
Old 2020-07-10, 20:03   #357
Uncwilly
6809 > 6502
 
 
"""""""""""""""""""
Aug 2003
101×103 Posts

10000011100010₂ Posts

I looked at the Worst P-1 Factoring Effort list. In the 46 and 47 million ranges it shows some exponents as having only a single LL test. But that is below the Current DC milestone.
Attached: Question.png (153.3 KB)
Old 2020-07-10, 20:14   #358
masser
 
 
Jul 2003
wear a mask

2×5×139 Posts

Quote:
Originally Posted by James Heinrich
If you're looking for P-1 work to do that has been done badly before, you're looking for Worst P-1 Factoring Effort.
I know. Even from the tables, I was only a click or two away from the mersenne.org sortable tables. I just thought having the averages available at the different range depths (100M, 10M, 1M, 100K, 10K) might be informative.
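As a toy sketch of that averaging idea (the data layout and values below are invented for illustration; mersenne.ca's real tables differ), bucketing per-exponent bounds at the 100K depth might look like:

Code:
# Average prior P-1 bounds per 100K exponent bucket (sketch).
from collections import defaultdict

# Invented example data: exponent -> (B1, B2).
pm1_bounds = {
    46012789: (500_000, 10_000_000),
    46087213: (650_000, 13_000_000),
    46103477: (440_000,  8_800_000),
}

totals = defaultdict(lambda: [0, 0, 0])   # bucket -> [B1 sum, B2 sum, count]
for p, (b1, b2) in pm1_bounds.items():
    t = totals[p // 100_000 * 100_000]
    t[0] += b1
    t[1] += b2
    t[2] += 1

for lo in sorted(totals):
    s1, s2, n = totals[lo]
    print(f"{lo:>9}+: avg B1 = {s1/n:,.0f}, avg B2 = {s2/n:,.0f}")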

I also thought that graphs similar to the bit-level depth graphs, but for B1/B2 values, would be a useful visualization, telling us where the ultra-deep, second-pass, and first-pass P-1 wavefronts might be.

I understand it's a lot of work. Thanks for the consideration and feedback.
Old 2020-07-14, 18:27   #359
kriesel
 
 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest

2²×3×353 Posts

Try checking 90M for worst P-1. Of all the results returned, one indicated a single LL; the others are blank. Maybe they were PRP and the form doesn't handle that case yet.

And oddly, despite the P-1 bounds being inadequate, manual assignment issued an unwanted DC for M90001391 rather than the requested P-1 assignment, necessitating an unreserve.
Could we get a checkbox, "issue no substitute assignment types", please?

Last fiddled with by kriesel on 2020-07-14 at 18:37
Old 2020-07-14, 20:17   #360
kruoli
 
 
"Oliver"
Sep 2017
Porta Westfalica, DE

251 Posts

Quote:
Originally Posted by James Heinrich
If you're looking for P-1 work to do that has been done badly before, you're looking for Worst P-1 Factoring Effort.
Would it be feasible to extend the upper maximum of "probability range" on that page, maybe with an override checkbox, for those crazy people like me who sometimes want to do higher P-1 on small exponents?

Last fiddled with by kruoli on 2020-07-14 at 20:18 Reason: Spelling error.
Old 2020-07-14, 21:23   #361
masser
 
 
Jul 2003
wear a mask

56E₁₆ Posts

Quote:
Originally Posted by kruoli
Would it be feasible to extend the upper maximum of "probability range" on that page, maybe with an override checkbox, for those crazy people like me who sometimes want to do higher P-1 on small exponents?
seconded.
Old 2020-07-15, 14:42   #362
James Heinrich
 
 
"James Heinrich"
May 2004
ex-Northern Ontario

7×419 Posts

Quote:
Originally Posted by kruoli
Would it be feasible to extend the upper maximum of "probability range" on that page
Naturally it's possible, but it comes at the cost of an exponential increase in candidates (and a bit more time required to generate the list every night).

Example numbers (these are from 6-month-old data so aren't currently exact, but they show the general trend):
Code:
1% =     2,649 rows
2% =   184,745 rows
3% =   496,116 rows <-- current setting
4% = 1,325,638 rows
This page is supposed to help find badly-P-1'd exponents; I don't think a P-1 probability >3% really qualifies under that definition.

But, keeping an open mind, what kind of % limit and exponent range were you thinking of?
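For context on what such a percentage can mean, here is a first-order, stage-1-only sketch (emphatically not mersenne.ca's actual computation): it uses Dickman's rho for smoothness odds and ignores both the 2kp+1 structure of Mersenne factors (which raises P-1's odds considerably in practice) and the B2 stage. All parameter values below are assumptions.

Code:
# First-order P-1 stage-1 success estimate (sketch).
from math import log

def dickman_rho(u, n=1000):
    # Crude forward-Euler solver for t*rho'(t) = -rho(t-1),
    # with rho(t) = 1 on [0, 1]; n grid points per unit interval.
    if u <= 1.0:
        return 1.0
    h = 1.0 / n
    steps = int(u * n)
    vals = [1.0] * (steps + 1)
    for i in range(n + 1, steps + 1):
        t_prev = (i - 1) * h
        vals[i] = vals[i - 1] - h * vals[i - 1 - n] / t_prev
    return vals[steps]

B1 = 500_000    # assumed stage-1 bound
tf_bits = 74    # bit level already cleared by TF (assumed)

# Sum over bit levels above the TF depth: ~1/k chance of a factor
# at bit level k, times the chance that a k-bit factor is B1-smooth.
prob = sum((1.0 / k) * dickman_rho(k * log(2) / log(B1))
           for k in range(tf_bits + 1, tf_bits + 31))
print(f"stage-1 success estimate: {100 * prob:.3f}%")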
Old 2020-07-15, 15:08   #363
kruoli
 
 
"Oliver"
Sep 2017
Porta Westfalica, DE

251 Posts

So that data is not generated on demand (not a live SQL query against the database, but rather a small query on the precalculated data you mentioned)? I was thinking about limiting the number of results shown to reduce the impact, but that doesn't really help with the generation of the precalculated list.

One could propose a new upper percentage like 5% (IMHO at least somewhat reasonable) or 10% (still applicable for smaller exponents, such as those with up to six digits and the smaller seven-digit ones, but only for those folks who really want to push the factoring). But since the general idea of that page is, as you said, working on really poorly P-1'ed exponents, higher percentages would only make sense if we could have them without negative side effects like slower loading times for everyone else looking in the 0-3% range.

Another idea would be to see which exponents have had the least GHz-days spent on P-1; of course, that's most likely a vastly different story.

Maybe it is "necessary" to have a separate site for the high-bound P-1'ing community. By no means do I want to say that you have to do this, James, of course. If there is a feasible solution to go way higher, maybe we have to work something out for ourselves, especially since the target group has quite a small number of members.

In the meantime, increasing the limit to 5% might be a good compromise if you agree, but that's up to you; I'm just shooting in the dark.