#12
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2×7×383 Posts
Quote:
I forgot to mention: I didn't find good candidates at 103, 123, 133, or 143 in the first 1000 spans. But any assignment in the neighborhood would do. And if they are already TF'd to target, there's no TF to delegate before doing a P-1.

Last fiddled with by kriesel on 2019-01-09 at 16:49
#13
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2·7·383 Posts
Quote:
My impression is that the difference between the PrimeNet TF target and the GPUto72 target there is about 4 bits. For example, https://www.mersenne.ca/exponent/453000013 (73-69=4), or https://www.mersenne.ca/exponent/53000039 (82-78=4), or https://www.mersenne.ca/exponent/53000039 (85-81=4). It's about 3 bits per exponent doubling (less than 3 at high values). Since the GPUto72 project is apparently successfully keeping CPUs off TF, and doing little P-1, and there's active interest in P-1 and primality testing on NVIDIA and AMD, and I'm running a lot of P-1 tests in CUDAPm1, going to the higher GPUto72 TF levels seems to me to make sense.

The distinction that the optimal bit level might shift depending on _which_ GPU model is concerned is a useful one. I haven't wrestled sufficiently with the questions of where an optimum lies, what optimal means, or how many dimensions an optimal-description may have when considering multiple work types on multiple models of GPUs and CPUs. I have the impression the GIMPS community jury is still out on that one.

The practical difference between the simpler single TF level expressed as the GPU72 level on James' excellent, useful site and the ideal optimum for any combination of CPU or gpu0 primality test, gpu1 P-1, and gpu2 TF is probably small in percentage-throughput terms. For owners of GPUs that are in some way atypical, such as RTX 20xx cards or Intel iGPUs where the TF/LL throughput ratio is significantly higher, or really old NVIDIA cards where the SP/DP ratio is significantly lower, relying on the GPU-model-specific curves is likely more important.

Last fiddled with by kriesel on 2019-01-09 at 16:38
#14
"/X\(‘-‘)/X\"
Jan 2013
2²×733 Posts
Quote:
In reality, I'm too lazy to switch software. So I TF a little higher than optimal on the 580s.
#15
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
123628 Posts
I used "typical" in the sense of whether this GPU model is like or unlike other models.

I agree that they will become popular / common, as measured in units sold over time and percentage of the deployed fleet. There's no need to apologize for or defend time management. Thanks for the 90M point of reference for bit levels on the various models.

Last fiddled with by kriesel on 2019-01-09 at 18:13
#16
If I May
"Chris Halsall"
Sep 2002
Barbados
2·5·7·139 Posts
Quote:
Those who have been in the "game" a while know that making a decision and then moving forward is often better than over-thinking and never moving. Yes, mistakes might be made, but one tends to learn from mistakes...
#17
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
2·7·383 Posts
Quote:
Plan, do, observe, adjust. DO is essential. Henry Ford didn't build an optimal automobile from the start; he fed an engine gas by the teaspoon initially.
#18
Jun 2005
USA, IL
193 Posts
I had grabbed all the 152 bin candidates. I'll add individual exponents with more variety as those finish over the next week-ish.
#19
Jun 2005
USA, IL
193 Posts
I've grabbed a couple more exponents, but I'm curious about your expectations for some that the PrimeNet server won't hand out. An exponent like 100,000,471 is only trial factored up to 73 bits, but already has P-1 and two matching LL tests done. I assume that's why it can no longer be reserved for more work through the manual GPU assignment page. Is that an exponent you would still want taken up to 76 bits, or not, since it's already confirmed composite?

Edit: never mind, I see you indicated "without P-1 result or primality result" in the original request.

Last fiddled with by potonono on 2019-01-16 at 03:51
#20
Oct 2018
1110 Posts
I'll donate some time as I'm finishing up a round of GPU72 DCTF. I've reserved the following from the 121 bin. Let me know if this makes sense.
Code:
Factor=N/A,121100117,72,77
Factor=N/A,121100171,72,77
Factor=N/A,121100219,72,77
Factor=N/A,121100233,72,77
Factor=N/A,121100269,72,77
Factor=N/A,121100351,72,77
Factor=N/A,121100383,72,77
Factor=N/A,121100407,72,77
Factor=N/A,121100411,72,77

Also, possibly not relevant, but I did recently take M421000049 to 82 bits (just exploring the much higher bit ranges...)
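For reference, the reserved lines use the standard worktodo TF entry format, `Factor=<assignment id>,<exponent>,<bits done>,<bits target>`, with `N/A` standing in for a real assignment id. A minimal parsing sketch (the helper name is mine, not from any GIMPS tool):

```python
# Parse a worktodo "Factor=" trial-factoring entry into its fields.
# Field layout assumed from the reservations quoted above:
#   Factor=<assignment id or N/A>,<exponent>,<bits done>,<bits target>

def parse_factor_line(line):
    key, _, rest = line.partition("=")
    if key != "Factor":
        raise ValueError("not a Factor= worktodo line: " + line)
    aid, exponent, b_from, b_to = rest.split(",")
    return {
        "aid": aid,                  # assignment id ("N/A" if unassigned)
        "exponent": int(exponent),   # Mersenne exponent p of 2^p - 1
        "from": int(b_from),         # TF already completed to this bit level
        "to": int(b_to),             # TF requested up to this bit level
    }

work = parse_factor_line("Factor=N/A,121100117,72,77")
print(work["exponent"], work["from"], work["to"])  # 121100117 72 77
```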
#21
Oct 2018
B16 Posts
Reserved a few exponents in the 123 bin.
Code:
123449987,72,77
123449939,72,77
123449917,72,77
123449819,72,77
123449791,72,77
123449743,72,77
123449737,72,77
123449663,72,77
123449611,72,77
123449591,72,77
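As a side check: 2^p − 1 can be prime only when the exponent p itself is prime, so every TF reservation should carry a prime exponent. A small sketch (my own helper, plain trial division, which is fast enough here since √123449987 ≈ 11111) that screens a reservation list for typos:

```python
# Screen a list of reserved exponents: any composite entry would be a
# typo, since GIMPS only assigns prime exponents for TF.

def is_prime(n):
    """Trial-division primality test; adequate for ~9-digit exponents."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

reserved = [123449987, 123449939, 123449917, 123449819, 123449791,
            123449743, 123449737, 123449663, 123449611, 123449591]
print([p for p in reserved if not is_prime(p)])
```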
#22
Oct 2018
138 Posts
Just reserved these in the 353 bin. Think I'll be good for a while in TF.
Code:
353000059,72,81
353000177,72,81
353000071,72,81