mersenneforum.org > Prime Search Projects > Twin Prime Search
Old 2010-05-10, 19:04   #1
Oddball
n=480K-500K LLR discussion archive

All old discussions, status updates, and lresults file attachments are in this thread.

Original post:
-----------------------------------
I've attached the results for n=499995-500000, k<100K. No primes were found.

Gribozavr, could you post the sieve files for n=480000-481000, k<10M?
Attached Files
File Type: txt lresults2.txt (30.0 KB, 128 views)

Old 2010-05-10, 19:18   #2
mdettweiler
Variable range LLR discussions + lresults files

Shouldn't the sieve effort for this range be completed before starting LLR testing? I believe the latest range status was posted in post #27 of that thread, after MooMoo disappeared. I don't recall the optimal sieving depth for the range, but I do know it was nowhere near there when it was last worked on.

Otherwise, though, splitting up the range into k<100K and k=100K-10M portions does sound like a good idea--since the project at this point is just getting back on its feet, it's nowhere near big enough to make a dent in the whole <10M range yet. Better, indeed, to tackle <100K first, and then worry about the rest later.

As it is now, the sieve effort is split up by 5K n-ranges over the entire range of k's. I'm not familiar with how twin sieves scale over k-ranges, but if the project is primarily going to tackle k<100K to start with, it might be worthwhile to catch up all the sieve's n-range divisions to the same depth, then split it up instead into k<100K and k=100K-10M portions, and work on the k<100K portion to get it up to optimal and ready for LLRing sooner.
Old 2010-05-10, 20:36   #3
Oddball

Quote:
Originally Posted by mdettweiler View Post
Shouldn't the sieve effort for this range be completed before starting LLR testing? I believe the latest range status was posted in post #27 of that thread, after MooMoo disappeared. I don't recall the optimal sieving depth for the range, but I do know it was nowhere near there when it was last worked on.

Otherwise, though, splitting up the range into k<100K and k=100K-10M portions does sound like a good idea--since the project at this point is just getting back on its feet, it's nowhere near big enough to make a dent in the whole <10M range yet. Better, indeed, to tackle <100K first, and then worry about the rest later.

As it is now, the sieve effort is split up by 5K n-ranges over the entire range of k's. I'm not familiar with how twin sieves scale over k-ranges, but if the project is primarily going to tackle k<100K to start with, it might be worthwhile to catch up all the sieve's n-range divisions to the same depth, then split it up instead into k<100K and k=100K-10M portions, and work on the k<100K portion to get it up to optimal and ready for LLRing sooner.
I think the k<100K portion is already up to optimal. On my PC, it takes about 6-8 minutes to LLR a k<100K candidate, and more than 10 minutes to find a factor for k<100K*

*assuming a sieve depth of p=65T. I've also been using an older sieve file that someone emailed me. With a newer sieve file, it should take even longer to find a factor for k<100K since there are fewer candidates.
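The rule of thumb behind this comparison: a range is optimally sieved once removing a candidate by sieving costs more CPU time than eliminating it with a single LLR test. A minimal sketch of that break-even check, using the rough per-candidate timings quoted above:

```python
# Break-even check: keep sieving only while a factor is cheaper than an LLR test.
# Timings are the rough figures quoted in the post above (one machine, k<100K).
llr_seconds_per_candidate = 7 * 60   # ~6-8 minutes per k<100K LLR test
sieve_seconds_per_factor = 10 * 60   # >10 minutes per k<100K factor at p=65T

keep_sieving = sieve_seconds_per_factor < llr_seconds_per_candidate
print(keep_sieving)  # False -> a factor now costs more than a test; start LLR
```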
Oddball is offline   Reply With Quote
Old 2010-05-10, 21:00   #4
mdettweiler

Quote:
Originally Posted by Oddball View Post
I think the k<100K portion is already up to optimal. On my PC, it takes about 6-8 minutes to LLR a k<100K candidate, and more than 10 minutes to find a factor for k<100K*

*assuming a sieve depth of p=65T. I've also been using an older sieve file that someone emailed me. With a newer sieve file, it should take even longer to find a factor for k<100K since there are fewer candidates.
Ah, I didn't know that. In that case, indeed, it's definitely time to start LLRing k<100K.

The tricky thing is that all four original n-range chunks are sieved to different depths, so k<100K over the entire n=480K-500K range can't be considered sieved to p=65T. Nonetheless, it does seem quite optimally sieved for that range given actual test results, so I suppose that's not worth worrying about now.

BTW, I'm assuming that at this point people are just taking their own range chunks right out of the original sieve files--it might be a good idea to start posting some pre-split files in ranges sized to last a few days on a typical computer. That should help reduce the possibility of human error (since the original sieve file is split into 4 parts, one has to merge them back together and sort appropriately before pulling out a range).

At the NPLB and CRUS projects, what we do is upload a few such pre-split files to our web server, then post links in the appropriate forum threads--see here for an example. (Note that that example is almost out of available files as is; normally we'd have at least 5 there.) If you don't have web hosting space that lets you upload individual files, Sendspace might be a good option--that way, unlike attaching them here in the forum, you're not limited to one per post.

Old 2010-05-10, 21:36   #5
Mini-Geek

Quote:
Originally Posted by Oddball View Post
I think the k<100K portion is already up to optimal. On my PC, it takes about 6-8 minutes to LLR a k<100K candidate, and more than 10 minutes to find a factor for k<100K*

*assuming a sieve depth of p=65T. I've also been using an older sieve file that someone emailed me. With a newer sieve file, it should take even longer to find a factor for k<100K since there are fewer candidates.
How exactly did you figure that it takes 10 minutes to find a factor for k<100K? I checked it myself and found that I could find one factor every 7.35 seconds.

Here are the full details of my check:
I ran a test with tpsieve at p=65T (a 1G range starting there) on 3 <= k <= 9999999, 485000 <= n <= 489999 and found 31 factors in 227.38 CPU seconds of sieving (this excludes the ~2-minute init time and the ~1 GB of RAM it needed; I ran it on a single thread). That's 7.35 seconds per factor.
I don't know whether I found more or fewer factors than expected, but at 31, the odds are vanishingly small that chance alone would stretch one factor every 7 seconds into one factor every 10 minutes.
I also know that 3 <= k <= 9999999, 485000 <= n <= 489999 (k<10M, a 5K n-range) is rather different from 3 <= k <= 99999, 480000 <= n <= 499999 (k<100K, the full n-range), but none of that can account for the ~82x difference between our two measurements.
Are you sure nothing else was slowing down your sieving? Were you using tpsieve? Did you have enough memory for the job, or was the machine swapping (out of RAM, with the OS falling back to the much slower hard disk as virtual memory, a.k.a. thrashing) and therefore progressing many times slower than it would fully in memory? (If I'm not mistaken, sieving the full k and n range at once with tpsieve needs about 4 GB of RAM to run properly, though I'm not sure of the effect of sieving k<100K over the whole n-range.)
BTW I'm running a 32-bit OS, so a 64-bit OS could sieve faster. That said, from the data point given and the timings of various numbers, I calculated an optimal depth of around 2000T (ignoring the effects of a 64-bit OS, etc.).
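The removal-rate arithmetic above, spelled out with the numbers as reported (the quotient rounds to 7.33; the post quotes the same quantity as ~7.35):

```python
# Removal rate from the sieving test reported above.
cpu_seconds = 227.38     # sieving time for a 1G range at p=65T, one thread
factors_found = 31       # factors over 3 <= k <= 9999999, 485000 <= n <= 489999

seconds_per_factor = cpu_seconds / factors_found
print(round(seconds_per_factor, 2))  # 7.33 -- the ~7.35 s/factor cited above
```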

Old 2010-05-10, 21:50   #6
mdettweiler

Quote:
Originally Posted by Mini-Geek View Post
How exactly did you figure that it takes 10 minutes to find a factor for k<100K? I checked it myself and found that I could find one factor every 7.35 seconds.

Here are the full details of my check:
I ran a test with tpsieve at p=65T (a 1G range starting there) on 3 <= k <= 9999999, 485000 <= n <= 489999 and found 31 factors in 227.38 CPU seconds of sieving (this excludes the ~2-minute init time and the ~1 GB of RAM it needed; I ran it on a single thread). That's 7.35 seconds per factor.
I don't know whether I found more or fewer factors than expected, but at 31, the odds are vanishingly small that chance alone would stretch one factor every 7 seconds into one factor every 10 minutes.
I also know that 3 <= k <= 9999999, 485000 <= n <= 489999 (k<10M, a 5K n-range) is rather different from 3 <= k <= 99999, 480000 <= n <= 499999 (k<100K, the full n-range), but none of that can account for the ~82x difference between our two measurements.
Are you sure nothing else was slowing down your sieving? Were you using tpsieve? Did you have enough memory for the job, or was the machine swapping (out of RAM, with the OS falling back to the much slower hard disk as virtual memory, a.k.a. thrashing) and therefore progressing many times slower than it would fully in memory? (If I'm not mistaken, sieving the full k and n range at once with tpsieve needs about 4 GB of RAM to run properly, though I'm not sure of the effect of sieving k<100K over the whole n-range.)
BTW I'm running a 32-bit OS, so a 64-bit OS could sieve faster. That said, from the data point given and the timings of various numbers, I calculated an optimal depth of around 2000T (ignoring the effects of a 64-bit OS, etc.).
Hint: you need to count only factors in the k<100K range to get the removal rate for that range.
Old 2010-05-10, 22:10   #7
Mini-Geek

Quote:
Originally Posted by mdettweiler View Post
Hint: you need to count only factors in the k<100K range to get the removal rate for that range.
Hm. I found none with k<100K. I suppose I could expect about 31/100 of them (0.31; 100 being 10M/100K) to have k<100K, so the absence of any isn't surprising. Say I found 0.31 factors in 227 seconds (my experimental result scaled to the number of factors for k<100K only, ignoring the time difference between sieving the portion I did and the portion one would actually sieve). That's 732 seconds per factor, which is roughly in line with Oddball's statement and is past the optimal point even allowing for a twice-as-fast boost from 64-bit sieving.
But surely that's not the most efficient way to go about it. If we can remove one candidate every 7.35 seconds (faster, even, with 64-bit) by sieving more now, it must be more efficient to sieve than to run LLR on even the lowest candidates, which take about 220 seconds each on my machine.
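The scaling in the previous paragraph, as plain arithmetic (all numbers taken from the posts above):

```python
# Scale the measured removal rate down to the k < 100K slice alone.
total_factors = 31                   # factors found over the full k < 10M range
k_fraction = 100_000 / 10_000_000    # k < 100K is 1/100 of the k space sieved

expected_small_k_factors = total_factors * k_fraction   # ~0.31 factors
seconds_per_factor = 227 / expected_small_k_factors
print(round(seconds_per_factor))  # 732 seconds per k<100K factor
```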

Old 2010-05-10, 22:51   #8
mdettweiler

Quote:
Originally Posted by Mini-Geek View Post
But surely that's not the most efficient way to go about it. If we can remove one candidate every 7.35 seconds (faster, even, with 64-bit) by sieving more now, it must be more efficient to sieve than to run LLR on even the lowest candidates, which take about 220 seconds each on my machine.
Yes, for the whole k<10M range, it is more efficient to sieve more. Sieving efficiency increases greatly as you increase the number of k's being sieved together. That's why, for instance, our team drives at NPLB have much greater optimal depths than any of the individual k's would on their own--hence the greater efficiency of that strategy.

So from that angle, you're right, it would be much more efficient to sieve more before starting LLR testing if the plan is to do the entire k<10M range any time soon. That has been the general indication so far, so yeah, it probably would be good to sieve more before doing LLR testing. And since there would still be plenty of LLR work available on the fixed-n n=390K effort, it's not like the project would be starved for LLR work while this range is being sieved. (Of course, this doesn't even consider whether n=390K is sieved enough to do much LLR, but that's another question for another thread.)

The final decision, of course, would be up to Oddball--who I see has now been officially confirmed as a moderator. Congratulations!
Old 2010-05-10, 23:12   #9
Lennart
Stop LLR, start sieving

I did a test on an i7 computer.

./tpsieve -p161e12 -P162e12 -i480000-484999.txt -ftpsfactors_160T-161T.txt

64bit Linux

~1 sec/factor

This was on 1 core.

Start sieving, stop LLR.


Lennart
Old 2010-05-10, 23:12   #10
Mini-Geek

Quote:
Originally Posted by mdettweiler View Post
And since there would still be plenty of LLR work available on the fixed-n n=390K effort, it's not like the project would be starved for LLR work while this range is being sieved. (Of course, this doesn't even consider whether n=390K is sieved enough to do much LLR, but that's another question for another thread.)
Really, right now this effort doesn't have LLR work ready: neither the n=390K effort nor the variable-n effort is sieved sufficiently for LLR.
It wouldn't be efficient for the project for Oddball to go ahead with LLR at the current sieve depths.

Old 2010-05-11, 01:44   #11
Oddball

I've decided to leave the LLR range reservations open for k<100000, but the reservations for 100K<k<10M are locked until we're close to reaching an optimal sieve depth.
