2019-01-09, 16:03  #12  
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
7×491 Posts 
Quote:
I forgot to mention: I didn't find good candidates at 103, 123, 133, or 143 in the first 1000 spans. But any assignment in the neighborhood would do. And if they are already TF'd to target, there's no TF to delegate before doing a P-1.
Last fiddled with by kriesel on 2019-01-09 at 16:49 

2019-01-09, 16:34  #13  
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
7·491 Posts 
Quote:
My impression is that the difference between the PrimeNet TF target and the GPUto72 target there is 4 bits. For example, https://www.mersenne.ca/exponent/453000013 (73-69=4) or https://www.mersenne.ca/exponent/53000039 (82-78=4, or 85-81=4). It's about 3 bits per exponent doubling (less than 3 at high values). Since the GPUto72 project is apparently successfully keeping cpus off TF, and doing little P-1, and there's active interest in P-1 and primality testing on NVIDIA and AMD, and I'm running a lot of P-1 tests in CUDAPm1, going to the higher GPUto72 TF levels seems to me to make sense.
The distinction that the optimal bit level might shift depending on _which_ gpu model is concerned is a useful one. I haven't wrestled sufficiently with the question of where an optimum lies, what optimal means, or how many dimensions an optimal description may have, when considering multiple work types on multiple models of gpus and cpus. I have the impression the GIMPS community "jury is still out on that one". The practical difference between the simpler single TF level expressed as the GPU72 level on James' excellent, useful site and the ideal optimum for any combination of cpu or gpu0 primality test, gpu1 P-1, gpu2 TF is probably small in percentage-throughput terms. For owners of gpus that are in some way not typical, such as RTX 20xx or Intel igps where the TF/LL throughput ratio is significantly higher, or the really old NVIDIA cards where the SP/DP ratio is significantly lower, relying on the gpu-model-specific curves is likely more important.
Last fiddled with by kriesel on 2019-01-09 at 16:38 

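The "about 3 bits per exponent doubling" rule of thumb above falls out of a simple cost comparison, which can be sketched as follows. TF to bit level b tries on the order of 2^b/p candidate factors (candidates have the form 2kp+1), while a primality test costs roughly p^2·log(p) work; setting the two equal gives b ≈ 3·log2(p) plus slowly varying terms. This is only an illustrative sketch: the offset `c` below is an arbitrary assumption, not a calibrated GIMPS or GPUto72 target.

```python
import math

def tf_breakeven_bits(p, c=20.0):
    """Rough break-even TF depth for Mersenne exponent p.

    TF to bit b tries ~2^b / p candidates of the form 2*k*p + 1,
    while an LL/PRP test costs roughly p^2 * log(p) multiplies.
    Setting 2^b / p ~ p^2 * log(p) gives
        b ~ 3*log2(p) + log2(log(p)) + const,
    i.e. roughly +3 bits each time p doubles. The offset c is an
    arbitrary illustrative constant, not a calibrated value.
    """
    return round(3 * math.log2(p) + math.log2(math.log(p)) - c)

# Doubling the exponent raises the break-even depth by ~3 bits:
for p in (53_000_039, 106_000_078, 212_000_156):
    print(p, tf_breakeven_bits(p))
```

The slope (≈3 bits per doubling) is the robust part of this model; the absolute level depends on machine-specific TF vs. LL/P-1 throughput ratios, which is exactly why the gpu-model-specific curves mentioned above matter.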
2019-01-09, 18:05  #14  
"/X\(‘‘)/X\"
Jan 2013
Ͳօɾօղէօ
2^{2}×5×139 Posts 
Quote:
In reality, I'm too lazy to switch software. So I TF a little higher than optimal on the 580s. 

2019-01-09, 18:11  #15 
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
7·491 Posts 
I used "typical" in the sense of this gpu model being like or unlike other models.
I agree that they will become popular / common, as measured in units sold over time and percentage of the deployed fleet. There's no need to apologize for or defend time management. Thanks for the 90M point of reference for bit levels on the various models.
Last fiddled with by kriesel on 2019-01-09 at 18:13 
2019-01-09, 22:41  #16  
If I May
"Chris Halsall"
Sep 2002
Barbados
8829_{10} Posts 
Quote:
Those who have been in the "game" a while know that making a decision and then moving forward is often better than overthinking, and never moving. Yes, mistakes might be made, but one tends to learn from mistakes.... 

2019-01-09, 23:40  #17  
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
7·491 Posts 
Quote:
Plan, do, observe, adjust. DO is essential. Henry Ford didn't build an optimal automobile from the start; he fed an engine gas by the teaspoon initially. 

2019-01-10, 04:30  #18 
Jun 2005
USA, IL
193 Posts 
I had grabbed all the 152 bin candidates. I'll add individual exponents with more variety as those finish over the next weekish.

2019-01-16, 03:46  #19 
Jun 2005
USA, IL
193 Posts 
I've grabbed a couple more exponents, but I'm curious about your expectations for some that the PrimeNet server won't hand out. An exponent like 100,000,471 is only trial-factored up to 73 bits, but already has P-1 and two matching LL tests done. I assume that's why it can no longer be reserved for more work through the manual gpu assignment page. Is that an exponent you would still want taken up to 76 bits, or not, since it's already confirmed composite?
Edit: never mind, I see you indicated "without P-1 result or primality result" in the original request.
Last fiddled with by potonono on 2019-01-16 at 03:51 
2019-02-03, 15:19  #20 
Oct 2018
2^{2}×3 Posts 
I'll donate some time as I'm finishing up a round of GPU72 DCTF. I've reserved the following from the 121 bin. Let me know if this makes sense.
Factor=N/A,121100117,72,77
Factor=N/A,121100171,72,77
Factor=N/A,121100219,72,77
Factor=N/A,121100233,72,77
Factor=N/A,121100269,72,77
Factor=N/A,121100351,72,77
Factor=N/A,121100383,72,77
Factor=N/A,121100407,72,77
Factor=N/A,121100411,72,77
Also, possibly not relevant, but I did recently take M421000049 to 82 bits (just exploring the much higher bit ranges...) 
2019-02-08, 20:10  #21 
Oct 2018
2^{2}·3 Posts 
Reserved a few exponents in the 123 bin.
Code:
123449987,72,77
123449939,72,77
123449917,72,77
123449819,72,77
123449791,72,77
123449743,72,77
123449737,72,77
123449663,72,77
123449611,72,77
123449591,72,77 
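The reservation lines in these posts follow the worktodo-style `Factor=` syntax (exponent, bit level already reached, target bit level), sometimes with the `Factor=N/A,` prefix dropped. As a hedged sketch (not the official parser, and real worktodo lines may carry an assignment ID in place of N/A), a helper like this hypothetical `parse_factor_line` can normalize both forms:

```python
def parse_factor_line(line):
    """Parse a worktodo-style TF reservation line.

    Accepts both the full form 'Factor=N/A,121100117,72,77' and the
    bare 'exponent,from,to' form used above. Returns a tuple
    (exponent, bits_done, bits_target). Illustrative sketch only.
    """
    body = line.strip().split("=", 1)[-1]   # drop 'Factor=' if present
    fields = body.split(",")
    if not fields[0].isdigit():             # skip leading 'N/A' or an AID
        fields = fields[1:]
    exponent, lo, hi = (int(f) for f in fields[:3])
    return exponent, lo, hi

print(parse_factor_line("Factor=N/A,121100117,72,77"))
print(parse_factor_line("123449987,72,77"))
```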
2019-03-07, 05:35  #22 
Oct 2018
12_{10} Posts 
Just reserved these in the 353 bin. Think I'll be good for a while in TF.
Code:
353000059,72,81
353000177,72,81
353000071,72,81 
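"Good for a while" is right: because each extra TF bit doubles the number of candidate factors to try, a 72→81 run is a large multiple of all the work spent reaching 72 bits. A minimal sketch of that arithmetic (simple candidate-count model; it ignores per-class sieving optimizations in real TF software):

```python
def tf_relative_work(bits_from, bits_to):
    """Relative TF effort to raise an exponent from one bit level to
    another, in units of 'one full pass up to bits_from'.

    Candidate factors below 2^b number roughly 2^b / (2*p), so each
    extra bit doubles the cumulative work, and the final bit alone
    costs as much as all previous bits combined.
    """
    return (2 ** bits_to - 2 ** bits_from) / 2 ** bits_from

# The 353M reservations above, 72 -> 81 bits:
print(tf_relative_work(72, 81))   # 511x the work of reaching 72 bits
```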