n=480K-500K LLR discussion archive
All old discussions, status updates, and lresults file attachments are in this thread.
Original post:
-----------------------------------
I've attached the results for n=499995-500000, k<100K. No primes were found. Gribozavr, could you post the sieve files for n=480000-481000, k<10M?
Variable range LLR discussions + lresults files
Shouldn't the [url=http://www.mersenneforum.org/showthread.php?t=12260]sieve effort[/url] for this range be completed before starting LLR testing? I believe the latest range status was posted in post #27 of that thread after MooMoo disappeared. I don't recall the optimal sieving depth for the range, but I do know the sieve was nowhere near it when last worked on.
Otherwise, though, splitting up the range into k<100K and k=100K-10M portions does sound like a good idea--since the project at this point is just getting back on its feet, it's nowhere near big enough to make a dent in the whole k<10M range yet. Better, indeed, to tackle k<100K first, and then worry about the rest later.

As it is now, the sieve effort is split up by 5K n-ranges over the entire range of k's. I'm not familiar with how twin sieves scale over k-ranges, but if the project is primarily going to tackle k<100K to start with, it might be worthwhile to catch all the sieve's n-range divisions up to the same depth, then split the sieve instead into k<100K and k=100K-10M portions, and work on the k<100K portion to get it to optimal depth and ready for LLRing sooner.
[QUOTE=mdettweiler;214574]Shouldn't the [url=http://www.mersenneforum.org/showthread.php?t=12260]sieve effort[/url] for this range be completed before starting LLR testing? [...][/QUOTE]
I think the k<100K portion is already up to optimal. On my PC, it takes about 6-8 minutes to LLR a k<100K candidate, and more than 10 minutes to find a factor for k<100K.*

*assuming a sieve depth of p=65T. I've also been using an older sieve file that someone emailed me. With a newer sieve file, it should take even longer to find a factor for k<100K, since there are fewer candidates.
[quote=Oddball;214589]I think the k<100K portion is already up to optimal. [...][/quote]
Ah, I didn't know that. In that case, indeed, it's definitely time to start LLRing k<100K. The tricky thing is that all four original n-range chunks are sieved to different depths, so k<100K over the entire n=480K-500K range can't be considered sieved to p=65T. Nonetheless, it does seem quite optimally sieved for that range given actual test results, so I suppose that's not worth worrying about now. :smile:

BTW, I'm assuming that at this point people are just taking their own range chunks right out of the original sieve files--it might be a good idea to start posting some pre-split files in ranges sized to last a few days on a typical computer. That should help reduce the possibility of human error, since the original sieve file is split into four parts: one has to merge them back together and sort appropriately before pulling out a range (a quick script for this step is sketched below).

At the NPLB and CRUS projects, what we do is upload a few such pre-split files to our web server, then post links in the appropriate forum threads--see [url=http://www.mersenneforum.org/showthread.php?t=9831]here[/url] for an example. (Note that that example is almost out of available files as is; normally we'd have at least five there.) If you don't have web hosting space readily available that allows you to upload individual files, Sendspace might be a good option--that way, as opposed to attaching them here in the forum, you're not limited to one per post.
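For anyone scripting that merge-and-split step, here's a minimal Python sketch. It assumes NewPGen-style sieve files (a single header line followed by "k n" pairs); the header handling and the four part filenames are illustrative assumptions, not project conventions:

[code]
# Merge the four per-n-range sieve parts, sort, and pull out a k-range.
# Assumes NewPGen-style files: one header line, then "k n" pairs.
# (Header handling and filenames are assumptions, not project conventions.)

def load(path):
    with open(path) as f:
        header = f.readline()
        pairs = [tuple(map(int, line.split())) for line in f if line.strip()]
    return header, pairs

parts = ["480000-484999.txt", "485000-489999.txt",
         "490000-494999.txt", "495000-499999.txt"]  # hypothetical names

header, merged = load(parts[0])
for part in parts[1:]:
    merged += load(part)[1]

merged.sort(key=lambda kn: (kn[1], kn[0]))  # sort by n, then by k

# Write out just the k<100K portion, keeping the original header.
with open("twin_480K-500K_k_lt_100K.txt", "w") as out:
    out.write(header)
    for k, n in merged:
        if k < 100000:
            out.write(f"{k} {n}\n")
[/code]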
[quote=Oddball;214589]I think the k<100K portion is already up to optimal. On my PC, it takes about 6-8 minutes to LLR a k<100K candidate, and more than 10 minutes to find a factor for k<100K. [...][/quote]
How exactly did you figure that it takes 10 minutes to find a factor for k<100K? I checked it myself and found that I could find one factor every 7.35 seconds.

Here are the full details of my check: I ran a test with tpsieve at p=65T (a 1G range starting there) on 3 <= k <= 9999999, 485000 <= n <= 489999 and found 31 factors in 227.38 CPU seconds of sieving (this doesn't include the ~2 minute init time, or however you might count the ~1 GB of RAM it needed; I ran it on a single thread). That's 7.35 seconds per factor. I don't know whether I happened to find more or fewer factors than expected, but at 31, the odds are impossibly low that chance alone would stretch this from one factor every 7 seconds to one factor every 10 minutes. I also know that 3 <= k <= 9999999, 485000 <= n <= 489999 (k<10M, an n=5K range) is rather different from 3 <= k <= 99999, 480000 <= n <= 499999 (k<100K, the full n range), but none of that can account for the ~82x difference between our two measurements.

Are you sure there wasn't something else slowing down your sieving? Were you using tpsieve? Did you have enough memory for what you were trying, or was it swapping (i.e., out of memory, with the OS falling back to the much slower hard disk as virtual memory--thrashing) and therefore progressing many times slower than it would entirely in RAM? (If I'm not mistaken, tpsieve scales such that sieving the full k and n range at once would need about 4GB of RAM to run properly, though I'm not sure of the effect of sieving k<100K over the whole n range.)

BTW, I'm running a 32-bit OS, so a 64-bit OS could sieve faster. That said, from the data point given and the timings of various numbers, I calculated an optimal depth of around 2000T (ignoring the effects of a 64-bit OS, etc.).
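For what it's worth, here is one way to reconstruct an estimate in that ballpark (a back-of-the-envelope model of my own, not necessarily how Mini-Geek computed it): the time to find one factor at depth p grows roughly like p*ln(p), so sieving stops paying off when that time exceeds one LLR test.

[code]
# Back-of-the-envelope optimal-depth estimate (the p*ln(p) scaling model
# is an assumption, not from the thread).
from math import log

t0, p0 = 7.35, 65e12   # measured: 7.35 s/factor at p = 65T (from the post)
llr = 220.0            # one LLR test on the lowest candidates, in seconds

def secs_per_factor(p):
    return t0 * (p * log(p)) / (p0 * log(p0))

# Walk p upward until one factor costs more than one LLR test.
p = p0
while secs_per_factor(p) < llr:
    p *= 1.01
print(f"optimal depth ~ {p / 1e12:.0f}T")  # ~1800T with this crude model
[/code]

That lands near 1800T, in the same ballpark as the ~2000T quoted above.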
[quote=Mini-Geek;214597]How exactly did you figure that it takes 10 minutes to find a factor for k<100K? I checked it myself and found that I could find one factor every 7.35 seconds. [...][/quote]
Hint: you need to count only factors in the k<100K range to get the removal rate for that range.
[quote=mdettweiler;214599]Hint: you need to count only factors in the k<100K range to get the removal rate for that range.[/quote]
Hm. I found none with k<100K. I suppose I could expect about 31 x (100K/10M) = 0.31 of them to have k<100K, so finding none isn't surprising. Say, then, that I found 0.31 factors in 227 seconds (my experimental result scaled to count only k<100K factors, ignoring any speed difference between sieving the range I did and sieving only k<100K the way you would in practice). That's about 732 seconds per factor, which is roughly in line with Oddball's statement and is past optimal even allowing a twice-as-fast sieving boost from 64-bit.

But surely that's not how it really works out to be most efficient. If we can remove one factor every 7.35 seconds (faster, even, with 64-bit) by sieving more now, it must be more efficient to sieve than to run LLR on even the lowest candidates, which take about 220 seconds on my machine.
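The arithmetic behind that comparison, spelled out (all numbers are taken from the posts above):

[code]
# Numbers from the thread: 31 factors in 227.38 CPU seconds at p=65T,
# sieving all of k<10M; an LLR test takes ~220 s on the same machine.
factors, cpu_secs = 31, 227.38
share = 100_000 / 10_000_000           # fraction of the k-space below 100K

print(cpu_secs / factors)              # ~7.35 s/factor over all of k<10M
print(cpu_secs / (factors * share))    # ~733 s/factor counting only k<100K hits

llr_secs = 220.0
print(llr_secs / (cpu_secs / factors)) # ~30: whole-range sieving beats LLR
[/code]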
[quote=Mini-Geek;214601]But surely that's not how it really works out to be most efficient. If we can remove one factor every 7.35 seconds (faster, even, with 64-bit) by sieving more now, it must be more efficient to sieve than to run LLR on even the lowest candidates, which take about 220 seconds on my machine.[/quote]
Yes, for the [i]whole range[/i] of k<10M, it is more efficient to sieve more. Sieving efficiency increases greatly as you increase the number of k's you're sieving together; that's why, for instance, we have much greater optimal depths for our team drives at NPLB than any of the individual k's would have on their own, and hence the greater efficiency of such a strategy. So from that angle, you're right: it would be much more efficient to sieve more before starting LLR testing [i]if[/i] the plan is to do the entire k<10M range any time soon. That has been the general indication so far, so yeah, it probably would be good to sieve more before doing LLR testing. And since there would still be plenty of LLR work available on the fixed-n n=390K effort, it's not like the project would be starved for LLR work while this range is being sieved. (Of course, this doesn't even consider whether n=390K is sieved enough to do much LLR, but that's another question for another thread.)

The final decision, of course, would be up to Oddball--who I see has now been officially confirmed as a moderator. Congratulations! :smile:
Stop LLR, start sieving
I did a test on an i7 computer:

[code]./tpsieve -p161e12 -P162e12 -i480000-484999.txt -ftpsfactors_160T-161T.txt[/code]

64-bit Linux, ~1 sec/factor. This was on 1 core.

Start sieving, stop LLR :smile:

Lennart
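As a rough illustration of how one might time a run like that, here is a small Python harness. It is hypothetical: the only things assumed, from the command line above, are that -p/-P bound the prime range and -f names the factor output file; the filenames are made up:

[code]
# Hypothetical timing wrapper for a tpsieve run; flag meanings inferred
# from Lennart's command line, filenames made up for illustration.
import subprocess, time

cmd = ["./tpsieve", "-p161e12", "-P162e12",
       "-i480000-484999.txt", "-ftpsfactors_161T-162T.txt"]
start = time.time()
subprocess.run(cmd, check=True)
elapsed = time.time() - start

with open("tpsfactors_161T-162T.txt") as f:
    nfactors = sum(1 for line in f if line.strip())

print(f"{elapsed / max(nfactors, 1):.2f} s/factor")
[/code]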
[quote=mdettweiler;214606]And since there would still be plenty of LLR work available on the fixed-n n=390K effort, it's not like the project would be starved for LLR work while this range is being sieved. (Of course, this doesn't even consider whether n=390K is sieved enough to do much LLR, but that's another question for another thread.)[/quote]
Really, right now this effort doesn't have LLR work ready: neither the n=390K effort nor the variable-n effort is sieved sufficiently to do LLR. It wouldn't be efficient for the project for Oddball to go ahead with LLR at the current sieve levels.
I've decided to leave the LLR range reservations open for k<100000, but the reservations for 100K<k<10M are locked until we're close to reaching an optimal sieve depth.