#4599
"Mr. Meeseeks"
Jan 2012
California, USA
2^3×271 Posts
#4600
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
5,419 Posts
I'm working through the following on my little fleet to help out a bit:

1) Thoroughly tuning mfaktc, upgrading to a 2047Mib-capable gpusievesize, and tuning again for that on most of my gpus; squeezing out up to 10% more from existing gear, with almost all running multiple mfaktc instances in parallel for the last additional bit of throughput. They can be driven to 100% indicated gpu load in GPU-Z or nvidia-smi (benchmarking results for several models were posted in the mfaktc thread); any gpu model over ~100 GHzD/day seems to benefit a little from multiple TF instances.

2) Reactivating some older gpus I had lying idle, now that I have an open-frame 6-PCIe rig mostly up and running (ASRock H81 Pro BTC 2.0; lowly 8GB-ram single-DIMM i7-4790, also running prime95 PRP DC and only using a third of the system ram; nice big high-efficiency power supply). The IGP refuses to take an OpenCL driver, and a couple of old gpus are not starting up currently. And ample cooling in the form of winter weather.

3) Diverting, short term, a GTX 1080 Ti from another use to TF (3 instances in parallel, after the 2047-capable upgrade and serious tuning, gets it to 99-100% gpu load), which takes its throughput to ~1.4 THzD/day; and shifting lesser gpus to TF also.

4) Getting ready for more incoming hardware.

5) Popping the covers off some old gpus to remove the dust/lint/felt buildup downstream of the fan; even fixed-clock old Quadros have some sort of thermal protection, perhaps shutting down some cores. One was so clogged I think it was interfering with the fan rotor and had been removed from service; after a cleaning, it is now back in the fray.

6) Further development of my own multi-gpu-app management program (makes monitoring status and collecting results easier and more efficient, especially important when running 2 and 3 TF instances per gpu in a system).

This combined TF throughput is mostly just softening up the low end of the 100M bin a bit, outside of the GPU72 flow.
Lately, manual TF assignments direct from mersenne.org "lowest exponents" have dropped off from 75/76-bit assignments to recently as low as 72/73. Occasionally I would get some 95M before GPU72 scooped them up again, but that hasn't happened in a while. I have some Quadro 2000s slogging through some 95M 75/76 at ~one a day each!

For the faster newer gpus, if mfaktc and mfakto were modified a bit more to raise the max gpusievesize above 2047Mib (currently using a signed 32-bit variable to compute the bit address) to perhaps 4095Mib (unsigned 32-bit), there appears to be a bit more gain yet to be had; at least for the GTX 1080 Ti and up. It's likely to matter more as faster gpus come out, judging by tests from a wide variety of gpu speeds. A percent here and there, times how many gpus? Probably the equivalent of adding whole gpus to the project.

Perhaps Ben could shift some of his horsepower from first primality tests to LL DC and PRP DC. Even 10% would help those a lot; 20+% to LL DC would be better, as it's lagging several years behind.

Last fiddled with by kriesel on 2020-01-31 at 06:09
#4601
Jun 2003
5082₁₀ Posts
Y'all are putting the cart before the horse. TF is supposed to help the project by accelerating the LL/PRP wavefront; and now that somebody has deployed LL resources to do just that, you want them to slow down?! Just do TF 1 or 2 bits less than optimal and call it a day. That last bit has negligible impact on project thruput compared to the previous bits. It will be a crying shame if, in the pursuit of mathematical optimality, you're letting many undersieved exponents thru to the P-1 and Cat 3/4 testers.
#4602
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
2^2·3·17·23 Posts
#4603
If I May
"Chris Halsall"
Sep 2002
Barbados
9767₁₀ Posts
GPU72 very carefully targets its resources to "feed" the various wavefronts optimally, including Cats 3 and 4 to at least 75 bits and the P-1'ers to 77. Most of the Cat 3 and 4 assignments will be recycled, and can then be brought up to 77 before being given out as Cat 2 or lower.

Please note that Cat 3 and 4 are already in the 10xM ranges, and Cat 2 is about to enter there. So any work being done there will be "useful" quite quickly (particularly considering George's new assignment sort on TF depth clause).

Lastly, while I appreciate that ~3 THzD/D is impressive, please note that for the last month GPU72's participants have averaged a total of ~300 THzD/D. I would argue that it's better to keep the disciplined targeted firepower working the way it is now. And, again, work in the 10xMs (ideally to 76 or 77) is needed right now.

Last fiddled with by chalsall on 2020-01-31 at 11:55 Reason: Smelling mistake.
#4604
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
5,419 Posts
Balance is good. The lag between first test and DC is around 8 years. If the mix of effort is LL 80% / DC 20%, the lag will hold about constant. I feel reducing the lag would be good. The lag has been growing, even before Delo joined; ten years ago the lag was about 6 years; 20 years ago, only about 3 years.

Outrunning the collective TF effort, so that some of the first-time primality testing is wasted on factorable candidates, does not strike me as an ideal plan. If one very well funded user does essentially all of the first-time primality testing, he gets essentially all of the probability of the next prime discovery; if he does not contribute in other areas too, other participants may begin to question whether their TF, P-1, and DC in support of that is worth their time and money. Participation is dropping: it used to be over 7000 users with results in the past year; now it's below 6200 and seems to be steadily declining. Propper is listed in top producers as ~98% ECM, 1% DC, 1% other. Delo is 99% first primality tests, 1% DC, 0% everything else. Not exactly balanced.

All contributions are welcome. But the heaviest hitters are encouraged to consider how their choice of mix may affect the project, including other participants' responses.

Last fiddled with by kriesel on 2020-01-31 at 14:27
#4605
If I May
"Chris Halsall"
Sep 2002
Barbados
23047₈ Posts
#4606
Random Account
Aug 2009
2^2×3×163 Posts
I believe many will TF to whatever level they feel is practical. Personally, I do not take on anything above 2^75. It is a matter of time spent versus the chance of finding a factor. 75s take an hour on my hardware in the 98M to 100M area. Anything beyond that is 20xx territory.
#4607
If I May
"Chris Halsall"
Sep 2002
Barbados
23047₈ Posts
And that's perfectly fine. Your kit; your choice.
#4608
"TF79LL86GIMPS96gpu17"
Mar 2017
US midwest
5,419 Posts
Not in my opinion. I'm personally running over 30 manually queued and reported gpu application instances, and that's climbing over time as I add hardware and add instances per gpu for greater throughput. (I'm working on automating management of that small but growing herd.) Ten TF assignments at 75/76 queued on a slow (Quadro 2000) gpu is 11 days to complete. I reserve assignments in blocks of 10 or more per gpu instance, and try to avoid them ever running dry, so latency is likely to be more than two weeks; occasionally months is not out of the question. I do my best to avoid expiration.
People do go on long vacations sometimes, or business travel, or get sick or injured, or have a term paper due or exam coming up, also. Last fiddled with by kriesel on 2020-01-31 at 15:12 |
#4609
6809 > 6502
"""""""""""""""""""
Aug 2003
101×103 Posts
2·4,909 Posts |
Would 2 months be OK with you? Or 3 or 4? Assignment recycling is important. Old TF assignments should be recycled ahead of the first-time LL wave in enough time that they can all get done.

Ben may discourage some (since he depresses their chance of finding a prime). But overall there is more total throughput. And the total number of users does fall in the months after the spike around a new prime discovery.

Last fiddled with by Uncwilly on 2020-01-31 at 15:31