#12
May 2007
Kansas; USA
10395₁₀ Posts
P=93T-97T is complete. Now working on P=97T-100T. ETA is Feb. 1st.
I currently only have one quad on it. After I finish this range, I'll bring a 2nd quad to the mix.
#13
May 2007
Kansas; USA
10395₁₀ Posts
P=97T-100T finished several days ago. Reserving P=100T-104T.
#14
May 2007
Kansas; USA
3³·5·7·11 Posts
Vaughan is reserving P=104T-112T. Max will be setting up a care package for him within the next day or so.
#15
Jan 2005
Sydney, Australia
5×67 Posts
OK, I'm ready when you are, guys - bring it on.
#16
"Curtis"
Feb 2005
Riverside, CA
4,861 Posts
Consider using the FFT jumps to decide where to break off pieces of your sieve. LLR tests jump ~20% at each FFT increase, while time increases linearly between jumps. This gives a logical point to stairstep the sieve. Your current depth is likely enough for the lowest FFT in your range, or getting there (see next paragraph).
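The stairstep idea above can be sketched numerically. This is a toy model only: the FFT boundary values and the linear-growth slope below are invented for illustration, not actual LLR timings.

```python
# Toy model of the "stairstep at FFT jumps" idea: LLR test time grows
# roughly linearly in n between FFT-size boundaries and jumps ~20% when
# the FFT size increases, so the boundaries are natural places to break
# off sieve pieces.  The boundary values here are hypothetical.

FFT_BOUNDARIES = [1_000_000, 1_300_000, 1_700_000, 2_200_000]  # hypothetical n values
JUMP = 1.20  # ~20% jump in test time at each FFT increase

def llr_time(n, base_time=1.0):
    """Toy model of LLR test time (arbitrary units) for exponent n."""
    t = base_time
    for b in FFT_BOUNDARIES:
        if n >= b:
            t *= JUMP          # step up at each FFT boundary
    # linear growth within the current FFT band (toy slope)
    return t * (n / FFT_BOUNDARIES[0])

# Break off one sieve piece per FFT band: 1M-1.3M, 1.3M-1.7M, 1.7M-2.2M.
pieces = list(zip(FFT_BOUNDARIES, FFT_BOUNDARIES[1:]))
print(pieces)
```

Under this model the test time at n just above a boundary is ~20% higher than just below it, which is what makes the boundary a logical break point.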
Also, it looks like your optimal sieve calculation is off by a factor of two. If you are interested in minimum project length, you should be sieving until factors-found time is double the time taken for an LLR test of the range you are looking to break off. The actual multiple is more like 2.2 or 2.3 in practice, but double gives a conservative place to break off a range.

The 70% idea you use only holds when deciding when to end a sieve completely, rather than when to break off a small piece. It seems you are neglecting the difference between marginal time and average time, much as a freshman Econ student does.

-Curtis

Last fiddled with by VBCurtis on 2010-02-14 at 10:22
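The break-off rule described above reduces to a simple comparison. A minimal sketch, with hypothetical timings; the 2.0 multiple is the conservative figure from the post (2.2-2.3 in practice):

```python
# Sketch of the break-off rule (all numbers hypothetical): a sub-range is
# sieved deeply enough to hand off to LLR once the sieve's time per
# factor found reaches about 2x the LLR test time for that sub-range.
# The factor of ~2 comes from using marginal, not average, factor time.

def should_break_off(sec_per_factor, llr_test_sec, multiple=2.0):
    """True when the range is sieved deeply enough to hand off to LLR."""
    return sec_per_factor >= multiple * llr_test_sec

# Hypothetical: the sieve removes one candidate every 5000 s and an LLR
# test in the sub-range takes 2000 s -> 5000 >= 2 * 2000, so break off.
print(should_break_off(5000, 2000))  # True
print(should_break_off(3000, 2000))  # False
```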
#17
May 2007
Kansas; USA
3³×5×7×11 Posts
Thanks for the info, Mike. I had been questioning the 70% rule for breaking off ranges myself over the last few months, but hadn't taken the time to work out the math to see if it was off significantly. Clearly it still holds for the final range of a sieve, but your analysis shows that it is off quite a bit when breaking off ranges.
I suppose if you considered double-checking, you should probably sieve twice as far. But on our other drives, I haven't considered that, because who knows if we'll ever actually double-check such large and huge n-ranges (heck, I might be dead by then, lol), and besides, software and machines are likely to be so much faster > 3-4 years from now when we might double-check vs. now. I don't want to spend today's resources to do too much of tomorrow's work that is too far out. Too many efficiencies are gained in the meantime.

OK guys, let's stop the sieve at P=112T (vs. 120T) and I'll see if I can determine what range that is sufficient for breaking off using Mike's math here. Previously I was looking at n=1M-1.3M, but this depth may be sufficient for n=1M-1.5M.

Vaughan, if you happen to see this: if you can look at the "ETA" date for each of your cores and report when your latest range will complete, that will give me a good idea of when we can start the drive.

My range of P=100T-104T will finish in ~8 hours, or early Monday afternoon. It will be nice to have some n>1M tests.

Edit: Mike, on a lighter side, this is rather a funny about-face turn of events. I used to harass you about over-sieving at RPS. Now you're showing me over-sieving here. Full circle, I guess. :-)

Gary

Last fiddled with by gd_barnes on 2010-02-15 at 09:38
#18
Jan 2005
Sydney, Australia
5·67 Posts
Quote:
#19
May 2007
Kansas; USA
3³·5·7·11 Posts
Thanks for the timeframe, Vaughan.
Meanwhile: P=100T-104T is complete. No primes found. Lots of composites found though.
Last fiddled with by gd_barnes on 2010-02-16 at 00:42 |
#20
"Curtis"
Feb 2005
Riverside, CA
12FD₁₆ Posts
Gary-
My comments have nothing to do with double-checking, and I certainly did not suggest you are over-sieving. My math consists of considering two situations: breaking off a file for LLR, and not doing so.

For the next sieve block planned, say 112 to 120T, run it both with and without the 1.0 to 1.3M range just long enough to see the ETA and number of expected factors. Take the difference in ETA, and divide by the additional number of factors the full sieve would find. This is the *marginal* time per factor for the lower range.

Using average time to decide when to break off a range makes no sense, as the sieve will continue onward no matter what. Since sieve efficiency rises with the square root of the n-range, it turns out the marginal effect is to find factors at about half the average time. Thus, the average time should be roughly double the average LLR time for the range you consider breaking off. However, actually performing the head-to-head test above is the best way - who cares about theory if it doesn't shorten the project?

A further side effect of this is that it rarely makes sense to break off a piece of a sieve when the total n-range is a factor of 2 or less from nmin to nmax. For instance, I have a sieve running that started as 500k to 4M; I have broken off up to 2.1M by now, and do not plan to break off another piece because it won't shorten the total testing+sieve time.

Try it. Let me know what you learn, as the effect may not be a factor of 2 for every sieve - most of my sieves have larger n-ranges, so your optimal ratio of avg sieve time to LLR time may be 1.8, or possibly lower.

-Curtis

Last fiddled with by VBCurtis on 2010-02-16 at 08:01
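The head-to-head procedure described above boils down to one division. A sketch with made-up placeholder numbers (these are not real sieve ETAs or factor counts):

```python
# Sketch of the head-to-head test: run the next sieve block with and
# without the candidate sub-range (e.g. n=1.0M-1.3M), read off each
# run's ETA and expected factor count, and divide the differences to
# get the marginal time per factor.  All inputs here are hypothetical.

def marginal_time_per_factor(eta_with, factors_with, eta_without, factors_without):
    """Extra sieve time per extra factor contributed by the sub-range."""
    extra_time = eta_with - eta_without
    extra_factors = factors_with - factors_without
    return extra_time / extra_factors

# Hypothetical: including the sub-range adds 50 hours of sieving and 40
# expected factors, so each of its factors costs 1.25 h at the margin.
print(marginal_time_per_factor(450.0, 360, 400.0, 320))  # 1.25
```

Comparing that marginal figure against the LLR test time for the sub-range then tells you whether further sieving still shortens the project.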
#21
May 2007
Kansas; USA
3³×5×7×11 Posts
Quote:
OK, I'll try exactly that. It's been a while since I've had econ classes, so I didn't think to draw the connection to the difference between average and marginal removal rates here as you described earlier. The same thing applies with a marginal effective income tax rate: you can't use your average tax rate for making decisions late in the year that may greatly affect the current year's income; your marginal effective tax bracket must be used. In other words, the marginal sieving rate totally makes sense to me now that you bring it up.

No, I didn't take it at all as you saying we were over-sieved. You were just pointing out the math, something that I can always relate to. :-) I merely assumed that we probably had by this point for the n=1M-1.3M range.

That is interesting about it not making sense to break off pieces when nmax/nmin < 2. We're exactly 2 here. I would not have considered that. Hmm. It might make sense to sieve the entire thing to the same depth. I'll have to look closely at this.

But contrary to the exacting math on all of this, which I agree will definitely give the shortest project length in a static situation, there is one small point where we might differ: I'm reluctant to sieve a FULL range, even if the nmax/nmin ratio is <= 2 (like it is here), to its optimum depth, because I see little chance of NPLB completing this huge range in < ~2 years. I know RPS has quite a few more resources than NPLB right now, but even so, the below thoughts are something that you still might consider for your k=5 n=2.1M-4M sieving effort (assumed). If you feel like it will take RPS > ~2 years to LLR n=2.1M-4M, you might consider sieving to a less-than-optimal depth, start testing and see where you get, and then at some point, likely after at least 1 year, continue sieving the part of the range that has not yet been tested. At that point, there's a good chance that sr1sieve/LLR/hardware/etc. are going to be marginally or substantially faster.

What I've done with the above is "act like" we're breaking off (or only testing) a smaller range, in your case maybe n=2.1M-3M, and sieving to its optimal depth for the full range. So what you would do is test a candidate at n=2.7M and sieve until your removal rate equals that time. Of course this doesn't take into account the marginal rate, so the math is certainly off; I'll leave it to you to get the math more exact. But I think the idea still holds: you don't want to use too many of today's resources to do tasks that won't be done for ~2 years or more, due to increases in future speed. (Of course the opposite also holds: you don't want to wait forever just because speeds will be faster either; otherwise we'd never do anything. lol)

Thanks again for the info.

To all: I'll post what I come up with in the next posting.

Gary

Last fiddled with by gd_barnes on 2010-02-16 at 09:30
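The "act like a smaller range" heuristic above can be sketched as a stopping condition. The timings below are hypothetical, and as the post notes, this deliberately ignores the marginal-rate correction:

```python
# Sketch of the representative-candidate heuristic (hypothetical numbers):
# time one LLR test of a representative candidate, e.g. n = 2.7M for a
# 2.1M-3M sub-range, and stop sieving once the sieve's removal rate
# (seconds per candidate eliminated) exceeds that test time.  Ignoring
# the marginal-rate correction means this slightly over-sieves.

def keep_sieving(sec_per_removal, representative_llr_sec):
    """True while each removal is still cheaper than one LLR test."""
    return sec_per_removal < representative_llr_sec

print(keep_sieving(8000, 10000))   # True: removals still cheaper, keep sieving
print(keep_sieving(12000, 10000))  # False: stop sieving and start LLR
```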
#22
Jan 2005
Sydney, Australia
5·67 Posts
Progress report:
My last 4T range is 67 to 68 percent complete, depending on the core. ETA is still Feb 25.