[QUOTE=chalsall;536309]That isn't happening. And this is GIMPS, not GIMFS.[/QUOTE]I took your post [URL]https://www.mersenneforum.org/showpost.php?p=534304&postcount=4550[/URL] to mean that first-time testing is outrunning the project's ability to optimally TF and P-1, and you were asking for input on how to cope. Was that wrong?
If Ben Delo chose to, he could probably cause double checking to complete to M57885161 by Christmas without even halving his rate of first-time primality test completion. And reduce the DC backlog by years, and ease the pressure on TF and P-1 in the bargain. Come to think of it, his industrial-grade kit would probably be quite good at P-1, which as implemented is relatively lacking in error detection and correction, and apparently falling behind. He could also drop a few grand on RTX 2080s, set up MISFIT, pay the utility bill for them, and improve the situation in TF a bit. [QUOTE=chalsall;536300] Lastly, while I appreciate that ~3 THzD/D is impressive, please note that for the last month GPU72's participants have averaged a total of ~300 THzD/D. I would argue that it's better to keep the disciplined targeted firepower working the way it is now. And, again, work in the 10xMs (ideally to 76 or 77) is needed right now.[/QUOTE]Soon to be ~6 THzD/D, and later ~8. But of course, a small component of the whole is just that: a small percentage of the total. And my little effort is not GPU72, and it is disciplined and as targeted as GPU72 laying claim to massive amounts of exponents at the wavefront allows it to be. I do appreciate the description of the juggling act for the various categories. Simple it ain't.
[QUOTE=Uncwilly;536315][B][U]Why are you not using Misfit?[/U][/B] Or a home-brewed script to handle this?:bangheadonwall:
Would 2 months be OK with you? Or 3 or 4? Assignment recycling is important. Old TF assignments should be recycled ahead of the first-time LL wave in enough time that they can all get done. Ben may discourage some (since he depresses the chance of them finding a prime). But, overall, there is more total throughput. And the total number of users does fall in the months after the spike around a new prime discovery.[/QUOTE]

My understanding is MISFIT only does TF. MISFIT as I recall requires MS Forms and .NET on every system it's installed on. (Not so easy for Linux users.) Nothing does CUDAPm1. No script that's been released does all the gpu apps I run.

I'm running a mix of everything I feel forwards the wavefronts toward discovery: TF, P-1, LLDC, PRP DC, PRP first test, LL first test, software QA, runtime scaling and limits probing, tuning tests and documentation, etc.

I have a homebrew script that does the big 6 gpu apps, to the extent of analyzing logs, determining active or dormant/hung/stopped status, gathering new results into one file per system, computing remaining worktodo in days of gpu throughput per app instance, etc. I haven't added get-work automation to the script yet; that's probably next or near it. I recently added a user-triggered-only self-update-from-file-share function (which is the 1% of it that is not OS-independent, implemented only for Win32 so far). I work on extending it now and then. It's Perl compiled to an executable with no requirement for installing anything else on a system it runs on. It's been almost two years in the works and feels like it's nearing ready for release (months). I alternate among writing code, debugging, testing, documenting, and many other unrelated tasks. Sometimes I write the documentation first and code to that.

I'd probably be fine with 3 months TF expiration. And not assigning first-tests until optimal TF and P-1 are complete. Let cpu first-test fall back to DC assignments and P-1. Not really my call though.
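The remaining-worktodo estimate described in the post above can be sketched as follows. The actual script is unreleased Perl; this is an illustrative Python sketch, and the function name, parameters, and units are assumptions, not the real script's interface:

```python
# Illustrative sketch of one homebrew-script feature described above:
# estimating remaining queued work, in days, for one GPU app instance.
# Names and units here are assumptions, not the real (Perl) script.

def remaining_days(worktodo_ghzdays, throughput_ghzdays_per_day):
    """Days of queued work left at this instance's measured throughput."""
    if throughput_ghzdays_per_day <= 0:
        return float("inf")  # dormant/hung/stopped instance makes no progress
    return worktodo_ghzdays / throughput_ghzdays_per_day

# e.g. 900 GHz-days of queued TF at ~1000 GHz-d/day of throughput
print(remaining_days(900.0, 1000.0))  # 0.9
```

Dividing queued GHz-days by measured GHz-days/day per instance is what lets one monitoring pass flag which of 30+ instances will run dry soonest.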
[QUOTE=chalsall;536311]This is a policy decision made by George, further exasperated by the fact that TF and P-1 assignments are not constrained by the LL/DC assignment rules.[/QUOTE]The issues are exacerbated, and so the participants may be exasperated.:smile::beer2:
Rules can be changed.
We are with axn here: if the man has resources and likes to LL or PRP, then he should LL or PRP. You won't tell me what to do with my rig, and the goal of the project is finding primes. Why the heck do we have the same discussion two times every year? :razz:
[QUOTE=kriesel;536316]I took your post [URL]https://www.mersenneforum.org/showpost.php?p=534304&postcount=4550[/URL] to mean, first time testing is outrunning the project's ability to optimally TF and P-1, and you were asking for input on how to cope. Was that wrong?[/QUOTE]
It was accurate at the time. However, some "big guns" stepped up to help readjust to the new realities. Oliver alone dumped 6.2 PHzD of work (!) in the last month. LaurV brought ~8 THzD/D of compute back to bear to help chase ahead of Cats 3 and 4 to 75 bits. And several others have stepped up their game as well. I'll have some time to run the numbers again this weekend to see how we're looking, but there's a chance we'll be able to keep going to 77 for Cats 2 and below for the foreseeable future.
[QUOTE=kriesel;536313]...I'm personally running over 30 manually queued and reported gpu application instances...[/QUOTE]
I can see where you might want to keep all these going. I only have [U]one[/U] now, which I use sparingly. When I do run it, I reduce it to 80% capacity. It runs cooler this way, and there is nearly no impact on throughput. Still over 1,000 GHz-d/day.
[QUOTE=kriesel;536318]My understanding is MISFIT only does TF. MISFIT as I recall requires MS Forms and .net on every system it's installed on. (Not so easy for linux users.) Nothing does CUDAPm1. No script that's been released does all the gpu apps I run.[/QUOTE]
Have you looked at Mark Rose's [URL="https://github.com/MarkRose/primetools"]primetools[/URL] codebase? I don't use it currently myself, but I have in the past. Reliable. If it doesn't do everything you want already, it would probably be a good base to build upon.
[QUOTE=chalsall;536326]It was accurate at the time. However, some "big guns" stepped up to help readjust to the new realities. Oliver alone dumped 6.2 PHzD of work (!) in the last month. LaurV brought ~8 THzD/D of compute back to bear to help chase ahead of Cats 3 and 4 to 75 bits. And several others have stepped up their game as well.
I'll have some time to run the numbers again this weekend to see how we're looking, but there's a chance we'll be able to keep going to 77 for Cats 2 and below for the foreseeable future.[/QUOTE]Wow, excellent, good to see the group respond effectively to the challenge, thanks for the update. [QUOTE=chalsall;536330]Have you looked at Mark Rose's [URL="https://github.com/MarkRose/primetools"]primetools[/URL] codebase? ... If it doesn't do everything you want already, it would probably be a good base to build upon.[/QUOTE]Yes, I've looked at it, and summarized its feature set in a table with the other similar stuff I could find. My impression is its feature set is a small subset of what I'm aiming for. Also, I don't know Python. Reprogramming vintage wetware is slow. [QUOTE=LaurV;536322]You won't tell me what to do with my rig... Why the heck do we have the same discussion two times every year? :razz:[/QUOTE]Wouldn't dream of trying, LaurV. But some things need to be discussed now and then. If nothing else, it improves someone's understanding of how things are. In this case, definitely including mine. Things changed a lot this month since Chalsall's Jan 5 post.
[QUOTE=LaurV;536322]We are with axn here: if the man has resources and likes to LL or PRP, then he should LL or PRP. You won't tell me what to do with my rig, and the goal of the project is finding primes. Why the heck do we have the same discussion two times every year? :razz:[/QUOTE]
+1 It's his firepower... he can do whatever the hell he wants with it, which at the moment seems to be prime hunting... If he wants advice or opinions about something, he'll ask for it most likely. Something I try to live by: Just because it's your opinion doesn't automatically make it fact.
LL vs. LLDC time gap trend over time
At one time DC got done within the hardware lifetime of the system that did the first test.
Flaky systems got identified, and corrective action could be taken regarding all their output in a timely manner (early triple check); the user could also be notified that their still-producing system was unreliable, so any hardware issues could be addressed and output reliability improved.

Over the past 20 years, the gap between first test and double check has grown drastically, from under two years to nearly a decade, longer than a typical hardware lifetime. Recent trends are toward more first-time primality testing and less DC, so the gap is worsening and will continue to worsen. I think the required DC rate should be increased to get this large and growing lag under control.

The method used to gauge the gap was to view a 1000-exponent range beginning at the indicated base value and find the longest time gap within that sample interval.

[CODE]base   LL year   DC time gap (years)
3M     (most date data missing)
4M     1998      1.5  (some first-test dates missing)
5M     1998      2.2
10M    2000      3.7
30M    2005      6.9
50M    2010      8.6
>50M   tbd[/CODE]
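The sampling method described above can be sketched in a few lines of Python. The (first-test year, DC year) pair format and the sample values are illustrative assumptions; real dates would come from the GIMPS results database:

```python
# Sketch of the gap-gauging method: within a sample of exponents, compute the
# first-test-to-double-check lag for each, and report the longest one.
# The (ll_year, dc_year) pair format is an assumption for illustration.

def max_dc_gap(sample):
    """sample: iterable of (ll_year, dc_year) pairs; None marks a missing
    date. Returns the largest DC lag in years, or None if no complete pairs."""
    gaps = [dc - ll for ll, dc in sample if ll is not None and dc is not None]
    return max(gaps) if gaps else None

# Toy sample loosely echoing the 10M row of the table (made-up numbers)
sample_10m = [(2000, 2003.7), (2000, 2001.2), (2001, None)]
print(round(max_dc_gap(sample_10m), 1))  # 3.7
```

Skipping entries with missing dates mirrors the caveats in the 3M and 4M rows, where absent first-test dates make the gap unmeasurable.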
[QUOTE=kriesel;536381]Flaky systems got identified and corrective action could be taken regarding all their output in a timely manner (early triple check), and also the user could be notified their still-producing system was unreliable and any hardware issues addressed and output reliability improved.[/QUOTE]
You've made an excellent argument to stop LL and only hand out PRP. :smile: