#2663
May 2013
East. Always East.
172710 Posts
Hmmm, I never thought about P-1 having already been done on these exponents. Is it possible the bounds weren't optimal, given that memory was scarcer back when the 30M range was being LL'ed for the first time?
#2664
"Jerry"
Nov 2011
Vancouver, WA
46316 Posts
Many times it was not done properly.

However, dubslow and I once ran P-1 on the same exponent. He found a Brent-Suyama factor which I did not find, with different memory settings. I ran it again with his settings and found it... so what is 'properly'?

Last fiddled with by flashjh on 2014-02-07 at 11:36
#2665
If I May
"Chris Halsall"
Sep 2002
Barbados
2×5×7×139 Posts
Quote:
We are not releasing back to Primenet anything above 33M TF'ed to below 71 bits, for any of the three assignment classes. As far as "charts" go, comparing the Primenet assignment summary with the GPU72 Current Trial Factoring Depth report should give you a good idea of our situation. Executive summary: we've got a comfortable buffer, and are building on it. I'd like to spend another week or so doing what we're doing, and then we can start to back off on the DC-TF'ing and move most of our resources back to LL-TF'ing. (Except you, LaurV!!! We've got a deal!!!)
#2666
If I May
"Chris Halsall"
Sep 2002
Barbados
2·5·7·139 Posts
Quote:
And keep in mind that LL'ing gets more expensive the larger the candidate, while TF'ing gets less expensive. So, while I might agree with you that we might be just at the borderline (or even behind) going to 71 bits in all of 33M, doing so in 34M and above is almost certainly profitable.

(Disclaimer: this is just my gut feeling; I haven't done an extensive analysis on this, as it will also be a function of each card's "Compute Capacity" and its throughput for TF'ing vs. LL'ing. And, frankly, it's just easier making the depth transition based on 1M ranges.)

Last fiddled with by chalsall on 2014-02-07 at 16:33 Reason: Smelling mistake.
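The scaling claim in this post can be illustrated with a toy cost model. Everything here is an illustrative assumption, not GPU72's actual accounting: TF work at a given bit depth is taken as proportional to the number of candidate factors (which falls as the exponent grows, since candidates have the form 2kp+1), while LL cost grows slightly faster than quadratically with the exponent (~p iterations, each an FFT squaring costing ~p log p).

```python
import math

def ll_cost(p):
    # ~p iterations, each an FFT squaring of length ~p: cost ~ p^2 * log p.
    # Arbitrary units; only ratios between calls are meaningful.
    return p * p * math.log(p)

def tf_bit_cost(p, b):
    # Candidates 2kp+1 in [2^b, 2^(b+1)): about 2^b / (2p) of them,
    # so each extra bit doubles the work, and doubling p halves it.
    return 2.0 ** b / (2 * p)

p30, p60 = 30_000_000, 60_000_000
print(tf_bit_cost(p60, 70) / tf_bit_cost(p30, 70))  # 0.5: TF twice as fast at 60M
print(ll_cost(p60) / ll_cost(p30))                  # > 4: LL more than 4x slower
```

So as the exponent rises, each LL test avoided is worth more while each bit of TF costs less, which is the sense in which deeper TF "is almost certainly profitable" higher up.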
#2667
If I May
"Chris Halsall"
Sep 2002
Barbados
230028 Posts
Quote:
Just let me know....
#2668
May 2013
East. Always East.
11·157 Posts
Quote:
Last fiddled with by TheMawn on 2014-02-07 at 17:46 |
#2669
If I May
"Chris Halsall"
Sep 2002
Barbados
2·5·7·139 Posts
Yes.
And thanks for (indirectly) pointing out that perhaps I should have language explaining this in the "Note:" section at the bottom of the page.

Also note that for the LL ranges 54M to 57M and 60M to 73M, there are several thousand candidates held and issued by Primenet that have not yet been TF'ed to the appropriate depth. This is because things changed about a year ago, when GPU sieving and additional GPU fire-power came on-line. We're now "dropping back down" and capturing these (when possible) as the candidates become available, to "clean up".
#2670
If I May
"Chris Halsall"
Sep 2002
Barbados
2·5·7·139 Posts
Hey all. Just a heads up...
With the new assignment rules enforced by Primenet, the "hack" which GPU72 used to transfer low DC and LL assignments to an "Anonymous" account, which could then be taken over by the GPU72 user, no longer works. I have thus disabled the spiders attempting the transfers.

Therefore, our supply of low DC and LL candidates will dwindle quickly. I will disable the manual assignment pages for these two work types over the weekend, with an explanatory message. Low P-1 assignments will still be available, as will (of course) TF'ing work.

For those using the GPU72 Proxy for DC or LL work, you won't need to remove the proxy line in prime.txt (but you can if you want to). It will simply pass all requests on to Primenet for fulfillment, and then both Primenet and GPU72 will be aware of your assignments (and you'll get the "pretty graphs" on GPU72 for such work).

For those who want the lowest available candidates, you will need to go to http://www.mersenne.org/thresholds/ and commit to being a serious player.

For the record, I consider this change on Primenet to be a very positive move, by George in particular and the GIMPS community as a whole. The low DC and LL assignments from GPU72 were quite a hack, and often caused me grief and cost me time. Having this appropriately managed by Primenet makes a great deal more sense.

Any comments or questions, as always, are very welcome.
#2671
Romulan Interpreter
Jun 2011
Thailand
7·1,373 Posts
#2672
Romulan Interpreter
Jun 2011
Thailand
961110 Posts
Quote:
All DC exponents had P-1 done, with (very) few exceptions caused by errors or omissions on the server's part or by the worker in charge of the first LL. It makes no sense to do a first-time LL (at the time it was done, years ago, but now too!) without doing proper TF and P-1 first; see the GIMPS math page. Otherwise you may lose months (a few years ago, a 33M LL took months to run on an average computer) on an exponent which, with some luck, could have been factored in a few minutes. So, the first-time LL was only run after the TF (enough TF or not, that is arguable; GPUs were not available at the time) and the P-1 (enough or not, again arguable) were done.

The DC exponents all had one LL done; that is why they are DC (to be Double-Checked, i.e. checked once, with that check now needing verification) exponents, and not LL exponents. It is not about size: you can see rows with the same exponent range in both tables. 45M, for example, appears in both the "DC" table and the "LL" table.

So, the DC exponents (almost) all had P-1 done. For those with a factor in, say, the 70-to-71-bit range, the P-1 had some chance of finding it (we won't talk about the probability of smoothness now), so those were already eliminated from the list. Extending TF to 71 bits, you can only find factors which were not smooth enough to be found by P-1. Therefore you have about a 1-in-a-hundred chance here, or a bit better than 1%, as Chris pointed out. If your hardware can run 90 or more TF assignments to 71 bits in the time it needs to do ONE LL test, you are better off doing TF: if you find a factor within 90 runs, you save time. If you can't do 90-100 runs or more, you lose time, and you will clear the exponents faster by doing the DC LL directly. Of course, you will not get billions of TF credit (if you care), and you will find no factors (if you care). But you will help the project more doing the DC LL directly, and in addition you may find a missed prime and get famous (and rich, if George still pays the bonus for it). The choice is yours.

On the LL front, things are different. Those exponents (in the first table at the given link) have not had an LL test yet, regardless of their size. So if you TF and find a factor, you save TWO LL tests (the first one and the DC) plus some P-1 time, because no P-1 was done either. And because of that, you have about a 1-in-71 chance of finding a factor. So if your hardware can run 71 or more TF assignments in the time it would need for TWO LL tests, you are better off doing TF (we ignore the P-1 here; it is insignificant). Since the time to TF doubles with every bit, but we are now saving two LL tests rather than one, we can go one bit higher for the same exponent range. Careful: we are only comparing within the same exponent range.

As Chris pointed out (and we totally agree here; our argument was only about the 33M range, which is at the limit), as the exponents get higher, TF to the same bit level gets (~linearly) faster, but LL gets (more than quadratically) slower. For example, if you double the exponent, say from 30M to 60M, then TF to 70 bits takes half the time (the speed doubles for 60M exponents), but the LL test of a 60M exponent takes 4 times as long as that of a 30M one: each iteration takes about twice the time (with all FFT optimizations; we take the best case here), and there are twice as many iterations to do (60M instead of 30M). TF to higher bit levels takes about twice as long per extra bit (for the same exponent) because there are twice as many factor candidates to test: there are twice as many numbers between 2^6 and 2^7 as there are between 2^5 and 2^6.

So, we TF'ed our 30M exponent to 70 bits; now we can afford to TF our 60M exponent higher. One extra bit (to 71) compensates for the TF speed doubling: 30M to 70 bits and 60M to 71 bits take the same time. Another 2 bits (to 73) compensates for the LL test getting 4 times slower: this takes 4 times the initial time, but the LL test saved is also 4 times longer. This should explain it for everybody; I can't explain it more basically than this. The only loose end is the P-1 part: for many people it is unclear what P-1 does. That's for the next post...
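The doubling arithmetic above can be checked mechanically. Using the same idealized proportionalities the post assumes (TF work to bit depth b ~ number of candidates 2kp+1 below 2^b, LL time ~ p^2 in the best FFT case), doubling the exponent and going 3 bits deeper keeps TF cost and LL cost saved in the same balance:

```python
def tf_time(p, bits):
    # Total candidates of the form 2kp+1 below 2^bits: ~2^bits / (2p).
    # Arbitrary units; only ratios are meaningful.
    return 2.0 ** bits / (2 * p)

def ll_time(p):
    # p iterations, each ~p work in the idealized FFT case: ~p^2.
    return float(p) ** 2

p = 30_000_000
# One extra bit (70 -> 71) exactly offsets the halved per-bit TF work at 2p...
ratio_one_bit = tf_time(2 * p, 71) / tf_time(p, 70)
# ...and two more bits (to 73) cost 4x, matching the LL test being 4x slower,
# so the break-even TF depth moves 3 bits up when the exponent doubles.
ratio_three_bits = tf_time(2 * p, 73) / tf_time(p, 70)
ratio_ll = ll_time(2 * p) / ll_time(p)
print(ratio_one_bit, ratio_three_bits, ratio_ll)  # 1.0 4.0 4.0
```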
#2673
If I May
"Chris Halsall"
Sep 2002
Barbados
2·5·7·139 Posts