2020-04-05, 11:23  #166  
Romulan Interpreter
Jun 2011
Thailand
20544_{8} Posts 
Also, I am talking solely about the line of GPUs I went through over time: gtx580, Titan (classic and black), 1080Ti, 2080Ti. All these cards were always on the "good" side (i.e. good FP64 ratio).
Last fiddled with by LaurV on 2020-04-05 at 11:27 

2020-04-05, 14:06  #167  
If I May
"Chris Halsall"
Sep 2002
Barbados
2×3×1,499 Posts 
Thanks to James' analysis, we know that for the more modern cards it is "optimal" to TF to 78 bits before running the FC ***on the same card***. Older cards shouldn't go as high. But this assumes each and every deployed card will switch between the work types (TF'ing vs. FC'ing) when appropriate. In reality this doesn't happen, so instead different people choose different depths to "pledge" to. And some just like TF'ing, so they don't really care if they TF past the optimal depth for their particular card(s). And, since BOINC isn't issuing FC work, this economic crossover analysis doesn't really apply.

Now, with regards to P-1'ing, again it comes down to resource availability. Because George sets the B1/B2 bounds as a function of the TF'ed depth, a P-1 run takes /slightly/ less time (with a correspondingly slightly lower probability of finding a factor during the run) the higher a candidate has been TF'ed. And, of course, whichever work type does the next step (TF a bit, or P-1) and finds a factor helps the other work type's workers, since further work is no longer needed.

With regards to how many P-1 jobs are done per day, that varies widely. R. Propper can do hundreds a day; I have no idea what his long-term plans are. 

2020-04-05, 15:02  #168  
Jun 2003
3·1,531 Posts 
IOW, if your intent is to speed up /only/ the FC (in order to accelerate the next prime find), then you should do 77 bits. 
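The break-even rule behind "do 77 bits" can be sketched numerically. A minimal Python sketch, using the common GIMPS heuristic that TF from 2^(b-1) to 2^b finds a factor with probability roughly 1/b; all of the cost figures below are made-up placeholders for illustration, not measurements of any real card:

```python
# Break-even sketch: TF'ing one more bit level (for the sake of speeding
# up the FC *alone*) pays off while the TF cost is below the expected
# FC work it saves. Heuristic: probability of a factor between 2^(b-1)
# and 2^b is ~1/b. Costs are in arbitrary units (e.g. GHz-days).

def tf_worthwhile(bit_level, tf_cost, fc_cost):
    """True if TF'ing this bit level costs less than the expected FC
    time saved by the chance of finding a factor in it."""
    p_factor = 1.0 / bit_level            # heuristic chance of a factor
    return tf_cost < p_factor * fc_cost

# Hypothetical numbers for one exponent near the wavefront; note that
# TF cost roughly doubles per extra bit while the saving barely moves.
fc_cost = 600.0                           # one first-time test (made up)
for b, tf_cost in [(75, 1.8), (76, 3.6), (77, 7.2), (78, 14.4)]:
    print(b, tf_worthwhile(b, tf_cost, fc_cost))
```

With these placeholder costs the crossover lands between 77 and 78 bits, matching the shape of the argument: because TF cost doubles each level while the expected saving stays nearly flat, the answer is very sensitive to the card's TF/FC speed ratio.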

2020-04-05, 15:32  #169  
If I May
"Chris Halsall"
Sep 2002
Barbados
2·3·1,499 Posts 
But, as we all know, James' analysis is an absolute ideal optimization of resources. That simply isn't possible at GIMPS, because of the huge disparity between the completely autonomous (and volunteer) participants. Short of an omniscient and omnipotent entity stepping forward to manage /all/ the resources, we do the best we can as the simple humans we are.

Further, the opposite argument to yours can be made. If we have TF'ing resources that aren't going to be redirected to FC'ing, should they optimally TF the ranges years ahead of the wavefronts? Or should they instead TF past the optimal only a few months ahead? I would argue the latter, where people are willing. 

2020-04-05, 15:52  #170 
Romulan Interpreter
Jun 2011
Thailand
2^{2}×2,137 Posts 
Everything is fine with what you and axn say, except that you point that link at a T4, which is not a "good" card by my definition. It spits out a lot of TF indeed, but it is very weak at FP64. That is more like a "gaming" card, and it should be used mostly for TF. So, you could be right, but (and here is the but) nobody has such a card, and nobody uses one, except on Colab, and even then only seldom (for the lucky guys). How many users play with a T4? Guys who can afford Teslas won't spoil them on TF. The comparisons should be done for 10xx, 16xx, and 20xx cards, as most people play with those.
Last fiddled with by LaurV on 2020-04-05 at 16:07 
2020-04-05, 16:12  #171  
If I May
"Chris Halsall"
Sep 2002
Barbados
8994_{10} Posts 
But, OK... If you want to get into the analysis-paralysis domain... It comes down to how many cards of the various Compute Versions are working at what. A Tesla P100 (CV 6.0) should only go to 76, while a GeForce GTX 1080 (CV 6.1) should go to 77, for example. Would you like to take on the task of providing us with an inventory of the currently deployed kit? 
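A per-Compute-Version policy like the one described above could be kept as a simple lookup. A hypothetical sketch: only the two depths named in the post (CV 6.0 → 76, CV 6.1 → 77) come from the discussion; the table name, function, and conservative fallback are assumptions for illustration:

```python
# Hypothetical mapping from NVIDIA Compute Version to the TF depth
# that is "optimal" before handing the exponent off for an FC on the
# same card. Only the 6.0 and 6.1 entries are from the discussion.
OPTIMAL_TF_DEPTH = {
    "6.0": 76,   # e.g. Tesla P100: strong FP64, so FC time is relatively cheap
    "6.1": 77,   # e.g. GeForce GTX 1080: weak FP64, so TF is relatively cheap
}

def optimal_depth(compute_version, default=76):
    """Return the pledged TF depth for a card; fall back conservatively
    (a made-up default) for Compute Versions not in the table."""
    return OPTIMAL_TF_DEPTH.get(compute_version, default)

print(optimal_depth("6.1"))   # 77
print(optimal_depth("7.5"))   # falls back to the default, 76
```

In practice this is exactly the "inventory" problem: the table is only as useful as the census of deployed cards that fills it in.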

2020-04-05, 16:30  #172  
P90 years forever!
Aug 2002
Yeehaw, FL
6843_{10} Posts 


2020-04-05, 16:48  #173  
P90 years forever!
Aug 2002
Yeehaw, FL
1ABB_{16} Posts 
Arguing about the optimum bit level is a bit silly: 1) there is a wide variety of GPUs participating, each with its own optimum, and 2) if you have an excess of TF resources available, put them to work TF'ing past the theoretical optimum bit level.

You have a short-term problem. TF resources "fell behind" the first-time testers, and Chris is now juggling these cool new TF resources to best feed the 4 wavefronts and the P-1'ers.

Your long-term solution is easy. Guess where the Cat 4 wavefront will be a year from now and TF that range as much as the TF resources allow (depth-first vs. breadth-first probably does not matter much).

At some point, we should also look at taking some of the excess TF resources and pointing them at the 100M-digit range. 

2020-04-05, 16:53  #174 
6809 > 6502
"""""""""""""""""""
Aug 2003
101×103 Posts

2020-04-05, 17:31  #175  
If I May
"Chris Halsall"
Sep 2002
Barbados
2·3·1,499 Posts 
Also, the DC range, once the FC ranges are comfortable. 

2020-04-05, 18:45  #176  
Quasi Admin Thing
May 2005
3^{2}·101 Posts 
I know it is intriguing to go and throw some resources at the 100M-digit range. It sure is nice to see that some "ancient" code exists to do that job, but until there is adequate room between the wavefront of TF and the various FC and P-1 wavefronts, I would sure appreciate it if we waived that part of TF for the time being. But of course, as you and George mention, for the future, once the world gets back to normal and our resources stabilize, it sure is a great suggestion 
