mersenneforum.org  

Old 2021-07-02, 03:18   #12
masser

I've grown to like Kriesel's longer posts. Don't forget Ricky Gervais' guitar lessons philosophy. If you don't like it, just move along; it's not for you; that's ok.
Old 2021-07-02, 07:27   #13
LaurV

Quote:
Originally Posted by kriesel View Post
Could we get some more volunteers performing wavefront P-1 to good bounds? <...> A little more help with P-1 please?
Quote:
Originally Posted by LaurV View Post
Edit: also, low category work is not easy to get, I tried and got 106M, 107M, and 110M
Whaaaa! I just got assigned 40 fresh pieces of 104M expos for P-1, by manual assignment.

The gods heard us, and our prayers were answered.
Old 2021-07-02, 12:35   #14
Zhangrc

Quote:
Originally Posted by kriesel View Post
Typically now recommended bounds are ~B1=650,000,B2=24,000,000.
Those are old bounds, calculated back when we needed to save 2 or more primality tests. Nowadays we only need to save about 1.1 PRP tests (~460 GHz-days) at 105M exponents. Take M105000001 for example: if it has been TFed to 2^77, then we take B1=344,401 and B2=8,610,025. (We can also try B2 ≈ 40×B1 with Prime95 v30.5 or later, say B2=13,776,000.)
If it has been TFed to 2^76, then we take B1=377,085 and B2=9,427,125.
I recommend doing more TF work. Given SO much GPU computing power, it makes sense to TF 106M to 113M exponents to 2^77. (Currently the TF wavefront is at 117M; most of those exponents will not be assigned this year.)
If this method is applied, we can use higher B1 bounds, but we would have to assign P-1 together with PRP.
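For reference, the B2 figures quoted above are plain multiples of B1; a minimal Python sketch (the helper name is mine, not from any GIMPS tool) reproduces the arithmetic:

```python
# Illustrative only: the B2 values quoted above are simple multiples of
# B1 -- the classic ~25x ratio, or ~40x with the improved stage 2 in
# Prime95 v30.5 and later. scale_b2 is a made-up name, not a GIMPS API.
def scale_b2(b1, multiplier=25):
    """Return a stage-2 bound as a plain multiple of the stage-1 bound."""
    return b1 * multiplier

print(scale_b2(344_401))       # 8610025 -- the quoted B2 for 2^77 TF
print(scale_b2(344_401, 40))   # 13776040 -- close to the quoted ~13,776,000
print(scale_b2(377_085))       # 9427125 -- the quoted B2 for 2^76 TF
```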

Last fiddled with by Zhangrc on 2021-07-02 at 13:04
Old 2021-07-02, 13:54   #15
kriesel

Something else to consider: for some GPU models that have already been benchmarked extensively, because relative performance depends on the exponent and therefore on FFT length, running standalone P-1 followed by standalone PRP/GEC/proof in v6.11-3xx is actually FASTER than running v7.2-x combined PRP/P-1. See the last 2 attachments of https://www.mersenneforum.org/showpo...35&postcount=2

I stand by my conclusion that the right bounds for production use are 2-tests-saved, not 1-test-saved, while there is still a mix of PRP-without-proof and PRP-with-proof (and some unfortunate rogue PRP-assignment-conversion-to-LL) being performed, and while the server's bounds requirements are what they are. At the earlier time of selecting P-1 bounds and processing them, we don't know or control which exponents will later get one or another primality-test approach, so aim for adequate coverage for each exponent. There's what the adequate bounds "should be" on the server later, after conversion to all-PRP-with-proof has been completed, and there's what they currently ARE on the server. Discussion of what they should be is fine and useful, to a point. But current work's bounds should be compatible with what the server accepts as adequate now.

(See, my posts were actually TOO SHORT! And sometimes there's pushback from moderators that I'm posting too often. For the same total content, posting one long message instead of several short ones helps comply with their preference for low post counts. There's also intentionally a bit of redundancy in the posts: an explanation plus an example may head off someone's post asking for clarification, and the clarification reply post. And I don't own a guitar, don't want to borrow or rent one, or spend the time to learn guitar, so why are you trying to sell me GD Fing time-consuming lessons?!! Posts like #6 take about a lesson's duration to write and proofread, and may become the basis of a reference post, which takes about another lesson to do. Usage of and feedback on such posts hopefully helps us play GIMPS GPUs etc. better and more efficiently. I'd rather post on the large side and have a higher chance of any readily identifiable misconceptions being identified, than not. Those who can't bear a 2-minute read have the options to skim or skip.)

Last fiddled with by kriesel on 2021-07-02 at 14:38
Old 2021-07-02, 14:49   #16
slandrum

If the majority of the wavefront is now PRP+proof, then I would argue that the average savings of a P-1 test should be set for a factor lower than 2 tests saved but higher than 1. If you can't know what type of PRP or (shudder) LL test will be run on the exponent in the future, you can at least make a weighted-average guess. Instead of doing significantly too much P-1 almost all of the time, a happy medium would be to do a little too much most of the time and a little too little the rest of the time.

Setting it higher than one can also account for tests that get started, are never completed, and have to be reassigned.

Last fiddled with by slandrum on 2021-07-02 at 14:50
Old 2021-07-02, 15:29   #17
kriesel

Abandoned PRP tests are a consideration I had not included. Thanks for that.

And I note again: what matters is not our opinions of optimal bounds under various assumptions, but the penalty function imposed by the PrimeNet server's threshold for retiring the P-1 task. The cost of having to redo P-1 to sufficient bounds to retire the task (usually on a different system, by a different user, with no benefit from the prior P-1 compute time expended) vs. the slight inefficiency of somewhat higher-than-optimal bounds cost and factor probability is not symmetric at all.

A little exploring of mersenne.ca indicates B1=650000,B2=24000000 is present in the GPU72 row up to p>105.3M, so manual GPU P-1ers editing in bounds need not check there very often.
Mprime/prime95 do the bounds optimization automatically, and routinely produce yet higher optimized bounds.

The back and forth about what's optimal P-1 strategy has been fun, albeit somewhat a rehash of what's already known and stated elsewhere. But such is not the purpose of this thread.
The thread was created to encourage more participants to help with the P-1 wavefront work. A LITTLE guidance of what parameters to use supports that goal.

Thanks for all the input, and let's get lots of wavefront P-1 done (to at least sufficient bounds, the first time).

If someone would like to take on determining the current mix at the first-primality-test wavefront (PRP without proof, PRP with proof, rogue LL first test, primality-test abandonment, server bounds threshold, etc.) and produce a credible P-1 cost-vs-bounds model and a calculation of the recommended random-wavefront-exponent P-1 tests-saved value, go for it.

Last fiddled with by kriesel on 2021-07-02 at 15:41
Old 2021-07-02, 16:40   #18
axn

Quote:
Originally Posted by kriesel View Post
I stand by my conclusion that the right bounds for production use are 2-tests-saved, not 1-test-saved, while there is still a mix of PRP-without-proof and PRP-with-proof
<snip>
We don't know or control which exponents will later get one or another primality test approach, at the earlier time of selecting P-1 bounds and processing them, so aim for adequate coverage for each exponent.
Massive statistics fail. If there is a mix, then you should compute the average tests saved and use that (e.g., if there is 50% of each category, use 1.5; 90% PRP + 10% LL gives 1×0.9 + 2×0.1 = 1.1; etc.). This is optimal.
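A tiny sketch of that expectation (hypothetical helper, names mine), matching slandrum's weighted-average suggestion above:

```python
# Expected primality tests saved per found factor, weighted by the mix
# of 1-test (PRP-with-proof) and 2-test (LL / PRP-without-proof)
# candidates. Illustrative only; not code from any GIMPS tool.
def avg_tests_saved(frac_one_test, frac_two_test):
    assert abs(frac_one_test + frac_two_test - 1.0) < 1e-9
    return 1 * frac_one_test + 2 * frac_two_test

print(avg_tests_saved(0.5, 0.5))   # 1.5 for a 50/50 mix
print(avg_tests_saved(0.9, 0.1))   # 1.1 for 90% PRP + 10% LL
```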

Quote:
Originally Posted by kriesel View Post
There's what the adequate bounds "should be" on the server now, later after conversion to all PRP-with-proof has been completed, and there's what they currently ARE on the server. Discussion of what it should be is fine and useful to a point. And current work's bounds should be compatible with what the server accepts as adequate now.
This is the first I'm hearing about a P-1 bounds requirement on the server. Is this something you've been told, or empirically observed? I've been under the impression that as long as /some/ P-1 was done, the server would stop handing out further P-1. If that is not the case, then it is utter nonsense.

Quote:
Originally Posted by kriesel View Post
(See, my posts were actually TOO SHORT!
<skip>
Those who can't bear a 2-minute read have the options to skim or skip.)
And you couldn't help but write a paragraph of meta commentary that was bigger than your content! And didn't realize that this is exactly the kind of wordiness that should be minimized?
Sure, we can skip your wordy thing, but next time you admonish someone for not reading your (wordy) reference material, STFU.
Old 2021-07-02, 18:40   #19
kriesel

Quote:
Originally Posted by axn View Post
Massive statistics fail. If there is a mix, then you should compute the average tests saved and use that (e.g., if there is 50% of each category, use 1.5; 90% PRP + 10% LL gives 1×0.9 + 2×0.1 = 1.1; etc.). This is optimal.


This is the first I'm hearing about a P-1 bounds requirement on the server. Is this something you've been told, or empirically observed? I've been under the impression that as long as /some/ P-1 was done, the server would stop handing out further P-1. If that is not the case, then it is utter nonsense.


And you couldn't help but write a paragraph with meta commentary that was bigger than your content! And didn't realize that this is exactly the kind of wordiness that should be minimized?
Sure, we can skip your wordy thing, but next time you admonish someone for not reading your (wordy) reference material, STFU.
I don't set PrimeNet server policy. (And not interested in that responsibility.)

Simple averaging works when the function is linear. Otherwise some weighting factors are needed at least.
Or if I'm wrong, please show the math, with consideration of the P-1 bounds-threshold effect described below.

I asked George a while ago, on a public thread IIRC, what it takes to retire a P-1 assignment, and he replied (again IIRC) confirming that inadequate bounds did not retire the P-1 task for the exponent. So another P-1 assignment may be issued, or the exponent's eventual primality-test assignment will be issued with p-1_done = 0 at the end; that is, the PRP tester will be told to do P-1, and will almost certainly not have access to any save files from previous P-1 attempts, so starts from scratch. As I understood it then, the number issued in all P-1 assignments was left at 2 tests saved, not 1, not 1.x. Primality-test assignments may indicate by their structure (Test vs. DoubleCheck, but not PRP vs. PRP DC, with or without proof) an estimate of whether one or two tests may be saved by finding a P-1 factor, ignoring the small effect of test or check errors that might necessitate additional tests. (I used search in the forum and PM but can't find it now. I did find an old PM from before the proof/cert introduction, saying to assume all LL and PRP tests will be double-checked.)
I would welcome an update from George or some other reasonably authoritative source, on what the server currently considers adequate bounds. Also from anyone a credible first-test mix analysis (PRP-proof, PRP-no-proof, bad-proof, abandon losses, LL)

Hmm, humor attempt rejected, with prejudice?

Last fiddled with by kriesel on 2021-07-02 at 19:30
Old 2021-07-02, 19:48   #20
kriesel

Excerpts from https://www.mersenneforum.org/showpo...8&postcount=22 (and content was vetted somewhat by PM Q&A with authors, feedback from users, my own testing, etc.):
p-1_done = 1 if done to adequate bounds, 0 if not done already to adequate bounds
tests_saved = integer number of future primality tests saved if a factor is found, usually 2 for a first test candidate, 1 for a double-check candidate, 0 if a sufficient P-1 has already been completed, or optionally up to 9 for aggressive P-1 factoring

For prime95/mprime:
LL
Test=[<AID>,|N/A,|<nul>]<exponent>,<how_far_factored>,<p-1_done>
Doublecheck=[<AID>,|N/A,|<nul>]<exponent>,<how_far_factored>,<p-1_done>

PRP (and PRP DC for manual assignments, or most versions)
PRP=[<AID>,|N/A,|<nul>]<k>,<b>,<n>,<c>[,<how_far_factored>,<tests_saved>[,<prp_base>,<residue_type>[,"comma-separated-list-of-known-factors"]]]

P-1
Pfactor=[<AID>,|N/A,|<nul>]<k>,<b>,<n>,<c>,<how_far_factored>,<tests_saved>


In-the-wild examples:
Server-manualpage-issued P-1 assignment example:
PFactor=(AID),1,2,104281559,-1,76,2

PrimeNet-issued LLDC assignment example:
DoubleCheck=(AID),57165769,74,1

Server-manualpage-issued PRP assignment example:
PRP=(AID),1,2,730000031,-1,84,2
PrimeNet-issued PRP assignment example:
PRP=(AID),1,2,104302727,-1,76,0,3,1
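A rough Python sketch of splitting such lines into their fields, following the templates above (illustrative only, not prime95 code; the AID handling is simplified):

```python
# Split a worktodo line into work type, optional assignment ID, and the
# remaining fields, per the templates above. Simplified sketch: a real
# parser would validate each field against the per-work-type template.
def parse_worktodo(line):
    work_type, _, rest = line.partition("=")
    fields = rest.split(",")
    aid = None
    # A leading AID (or the "(AID)" placeholder, or "N/A") is not a number.
    if fields and not fields[0].lstrip("-(").rstrip(")").isdigit():
        aid, fields = fields[0], fields[1:]
    return work_type, aid, fields

print(parse_worktodo("PFactor=(AID),1,2,104281559,-1,76,2"))
print(parse_worktodo("DoubleCheck=(AID),57165769,74,1"))
```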
Old 2021-07-03, 11:01   #21
Zhangrc

Quote:
Originally Posted by kriesel View Post
running standalone P-1 followed by standalone PRP/GEC/proof in V6.11-3xx is actually FASTER than running v7.2-x combined PRP/P-1.
I have the impression that GPUOWL uses B1=5,000,000 with no stage 2. That's way too HUGE. Using B1=500,000 with B2 ≈ 40×B1 (with the Prime95 v30.5 stage-2 optimizations) would make more sense.
Old 2021-07-03, 14:12   #22
kriesel

Quote:
Originally Posted by Zhangrc View Post
I have the impression that GPUOWL uses B1=5,000,000 with no stage 2.
Based on running dozens of standalone (v6.11-3xx) Gpuowl P-1 tasks daily, I believe your impression is completely unsupported by my experience: thousands of wavefront-or-higher exponents, multiple gpuowl versions, multiple GPU models (16 GiB of memory down to 2 GiB), and a wide range of exponents. I don't recall EVER seeing gpuowl select B1 ~5M with no B2 on wavefront exponents, or skip stage 2, unless it found a factor in stage 1 so there was no need for stage 2, which is proper behavior also shown in other applications. I mostly use v6.11-380 or -364, but have some test experience with the other P-1-capable versions (5.0, 7.x).
Typical gpuowl v6.11-3xx default behavior on wavefront exponents is B1=1M, B2=30M.
Perhaps gpuowl v7.x is optimized to select higher B1 since its stage 1 is so low-cost.
Anyone who dislikes what gpuowl selects is free to explicitly specify bounds for it to run: prepend B1=500000,B2=20000000; (or whatever bounds are preferred) to the PFactor (v6.11) or PRP (v7.x) worktodo line. I currently do similarly with B1=650000,B2=24000000; for production P-1, after having used the default 1M,30M extensively.

What's your impression based on?
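The prepend trick described above is literally string concatenation on the worktodo line; a hypothetical Python helper (the name is mine, not from gpuowl):

```python
# Prepend explicit bounds to a gpuowl worktodo entry, using the
# "B1=...,B2=...;" prefix syntax described in the post above.
# with_bounds is a made-up helper name, not part of any GIMPS tool.
def with_bounds(line, b1, b2):
    return f"B1={b1},B2={b2};{line}"

print(with_bounds("PFactor=(AID),1,2,104281559,-1,76,2", 650000, 24000000))
# -> B1=650000,B2=24000000;PFactor=(AID),1,2,104281559,-1,76,2
```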

Last fiddled with by kriesel on 2021-07-03 at 14:41