#1
Oddball
May 2010
499 Posts

All old discussions, status updates, and lresults file attachments are in this thread.
Original post:
-----------------------------------
I've attached the results for n=499995-500000, k<100K. No primes were found. Gribozavr, could you post the sieve files for n=480000-481000, k<10M?

Last fiddled with by Oddball on 2010-05-16 at 08:09
#2
A Sunny Moo
Aug 2007
USA (GMT-5)
3×2,083 Posts

Shouldn't the sieve effort for this range be completed before LLR testing starts? I believe the latest range status was posted in post #27 of that thread after MooMoo disappeared. I don't recall the optimal sieving depth for the range, but I do know it was nowhere near that depth when last worked on.
Otherwise, though, splitting up the range into k<100K and k=100K-10M portions does sound like a good idea--since the project at this point is just getting back on its feet, it's nowhere near big enough to make a dent in the whole k<10M range yet. Better, indeed, to tackle k<100K first and worry about the rest later.

As it is now, the sieve effort is split up by 5K n-ranges over the entire range of k's. I'm not familiar with how twin sieves scale over k-ranges, but if the project is primarily going to tackle k<100K to start with, it might be worthwhile to catch up all the sieve's n-range divisions to the same depth, then split the sieve instead into k<100K and k=100K-10M portions, and work on the k<100K portion to get it up to optimal depth and ready for LLRing sooner.
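For anyone who ends up doing that k-range split by hand, here is a minimal sketch in Python. It assumes a NewPGen-style sieve file (one header line, then whitespace-separated "k n" pairs per candidate); the filenames and the 100K cutoff are placeholders, not the project's actual files.

```python
# Rough sketch: split a twin sieve file into k<100K and k>=100K portions.
# Assumes a NewPGen-style file: one header line, then "k n" pairs per line.
# Filenames and the cutoff are placeholders.

CUTOFF = 100_000

with open("sieve_480K-500K.txt") as src, \
     open("sieve_k_lt_100K.txt", "w") as low, \
     open("sieve_k_ge_100K.txt", "w") as high:
    header = src.readline()
    low.write(header)      # keep the header in both halves
    high.write(header)
    for line in src:
        parts = line.split()
        if len(parts) < 2:
            continue       # skip blank or malformed lines
        k = int(parts[0])
        (low if k < CUTOFF else high).write(line)
```

Since the header is copied into both output files, each portion can then be sieved or reserved on its own.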
#3
Oddball
May 2010
499 Posts

Quote:
*assuming a sieve depth of p=65T. I've also been using an older sieve file that someone emailed me. With a newer sieve file, it should take even longer to find a factor for k<100K since there are fewer candidates. |
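For reference, a quick way to gauge how thin the k<100K slice of a given sieve file actually is: the short Python sketch below simply counts the remaining candidates. It again assumes a NewPGen-style file (header line, then "k n" pairs), and the filename is a placeholder.

```python
# Count how many k<100K candidates remain in a sieve file.
# Assumes a NewPGen-style file: one header line, then "k n" pairs.
# The filename is a placeholder.

CUTOFF = 100_000
total = small = 0

with open("sieve_480K-500K.txt") as f:
    f.readline()  # skip the header line
    for line in f:
        parts = line.split()
        if len(parts) < 2:
            continue
        total += 1
        if int(parts[0]) < CUTOFF:
            small += 1

print(f"{small} of {total} remaining candidates have k < {CUTOFF}")
```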
#4
A Sunny Moo
Aug 2007
USA (GMT-5)
6249₁₀ Posts

Quote:
The tricky thing is that all four original n-range chunks are sieved to different depths, so k<100K over the entire n=480K-500K range can't be considered sieved to p=65T. Nonetheless, it does seem quite optimally sieved for that range given actual test results, so I suppose that's not worth worrying about now.

BTW, I'm assuming that at this point people are just taking their own range chunks right out of the original sieve files--it might be a good idea to start posting some pre-split files in ranges sized to last a few days on a typical computer. That should help reduce the possibility of human error (since the original sieve file is split into 4 parts, one has to merge them back together and sort appropriately before pulling out a range).

At the NPLB and CRUS projects, what we do is upload a few such pre-split files to our web server, then post links in the appropriate forum threads--see here for an example. (Note that that example is almost out of available files as is; normally we'd have at least 5 there.) If you don't have web hosting space readily available to you that allows you to upload individual files, Sendspace might be a good option--that way, as opposed to attaching them here in the forum, you're not limited to 1 per post.

Last fiddled with by mdettweiler on 2010-05-10 at 21:08
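For what it's worth, the merge/sort/re-split step described above could look something like the Python sketch below. It assumes NewPGen-style files ("k n" pairs after a single header line); the filename patterns, the sort order (by n, then k), and the chunk size are all placeholders rather than anything the project has standardized on.

```python
# Rough sketch of the merge / sort / re-split step described above.
# Assumes NewPGen-style sieve files: a header line, then "k n" pairs.
# Filenames, sort order, and chunk size are placeholders.

import glob

pairs = []
header = None
for name in sorted(glob.glob("sieve_part*.txt")):   # the four original parts
    with open(name) as f:
        header = f.readline()                        # reuse one header for output
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                pairs.append((int(parts[1]), int(parts[0])))  # store as (n, k)

pairs.sort()                                         # sort by n, then by k

CHUNK = 50_000                                       # candidates per pre-split file
for i in range(0, len(pairs), CHUNK):
    with open(f"chunk_{i // CHUNK + 1:03d}.txt", "w") as out:
        out.write(header)
        for n, k in pairs[i:i + CHUNK]:
            out.write(f"{k} {n}\n")
```

Chunks produced this way are contiguous in n; CHUNK would need to be tuned so that one file lasts a few days on a typical computer.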
#5
Account Deleted
"Tim Sorbera"
Aug 2006
San Antonio, TX USA
11×389 Posts

Quote:
Here are the full details of my check: I ran a test with tpsieve at p=65T (a 1G range starting there) on 3 <= k <= 9999999, 485000 <= n <= 489999 and found 31 factors in 227.38 CPU seconds of sieving (this does not include the ~2 minute init time, or however you want to account for the ~1 GB of RAM it needed; I ran it on a single thread). That's 7.35 seconds per factor. I don't know whether I happened to find more or fewer factors than expected, but at 31 I'm sure the odds are impossibly low that chance alone would stretch this from 1 factor every 7 seconds to 1 factor every 10 minutes.

I also know that 3 <= k <= 9999999, 485000 <= n <= 489999 (k<10M, an n=5K range) is rather different from 3 <= k <= 99999, 480000 <= n <= 499999 (k<100K, the full n range), but none of that can account for the ~82 times difference between our two measurements. Are you sure there wasn't something else slowing down your sieving? Were you using tpsieve? Did you have enough memory for what you were trying, or was it swapping (i.e. out of RAM, so the OS falls back to the much slower hard disk as virtual memory, a.k.a. thrashing) and therefore progressing many times slower than it would fully in memory? (If I'm not mistaken, sieving the full k and n range at once with tpsieve needs about 4 GB of RAM to run properly, though I'm not sure how sieving k<100K over the whole n range affects that.)

BTW, I'm running a 32-bit OS, so a 64-bit OS could sieve faster. That said, from the data point given and the timings of various numbers, I calculated an optimal depth of around 2000T (ignoring the effects of a 64-bit OS, etc.).

Last fiddled with by TimSorbet on 2010-05-10 at 21:40
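To spell out the arithmetic behind that ~2000T estimate: if the CPU time per factor found grows roughly linearly with the sieve depth p (an approximation I'm assuming here, not something tpsieve guarantees), the break-even depth is where the time per factor matches the time for one LLR test. A quick Python sketch with the numbers above:

```python
# Back-of-the-envelope optimal sieve depth from the figures quoted above.
# Assumption (an approximation, not a property of tpsieve): CPU time per
# factor found grows roughly linearly with the sieve depth p.  Sieving pays
# off as long as removing a candidate is cheaper than LLR-testing it.

measured_depth = 65e12           # p = 65T, where the 1G test range started
secs_per_factor = 227.38 / 31    # ~7.35 s per factor at that depth
llr_secs = 220                   # one LLR test on the lowest candidates

# Break-even: secs_per_factor * (p / measured_depth) == llr_secs
optimal_depth = measured_depth * llr_secs / secs_per_factor

print(f"~{optimal_depth / 1e12:.0f}T")   # ~1950T, i.e. in the ballpark of 2000T
```

A 64-bit binary or a measurement taken at a deeper p would shift the exact figure, but the order of magnitude matches the ~2000T quoted above.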
#6
A Sunny Moo
Aug 2007
USA (GMT-5)
3×2,083 Posts

Quote:
#7
Account Deleted
"Tim Sorbera"
Aug 2006
San Antonio, TX USA
11·389 Posts

Quote:
But surely that can't really be the most efficient approach. If we can find one factor (and thus remove one candidate) every 7.35 seconds--faster, even, with 64-bit--by sieving more now, it must be more efficient to sieve than to run LLR on even the lowest candidates, which take about 220 seconds each on my machine.

Last fiddled with by TimSorbet on 2010-05-10 at 22:13
#8
A Sunny Moo
Aug 2007
USA (GMT-5)
14151₈ Posts

Quote:
So from that angle, you're right: it would be much more efficient to sieve more before starting LLR testing if the plan is to do the entire k<10M range any time soon. That has been the general indication so far, so yeah, it probably would be good to sieve more before doing LLR testing. And since there would still be plenty of LLR work available on the fixed-n n=390K effort, it's not like the project would be starved for LLR work while this range is being sieved. (Of course, this doesn't even consider whether n=390K is sieved enough to do much LLR, but that's another question for another thread.)

The final decision, of course, would be up to Oddball--who I see has now been officially confirmed as a moderator. Congratulations!
#9
"Lennart"
Jun 2007
1120₁₀ Posts

I did a test on an i7 computer:

./tpsieve -p161e12 -P162e12 -i480000-484999.txt -ftpsfactors_160T-161T.txt

64-bit Linux, ~1 sec/factor. This was on 1 core.

Start sieving, stop LLR!

Lennart
#10
Account Deleted
"Tim Sorbera"
Aug 2006
San Antonio, TX USA
10267₈ Posts

Quote:
It wouldn't be efficient for the project for Oddball to go ahead with LLR at the current sieve levels.

Last fiddled with by TimSorbet on 2010-05-10 at 23:18
#11
Oddball
May 2010
499 Posts

I've decided to leave the LLR range reservations open for k<100000, but the reservations for 100K<k<10M are locked until we're close to reaching an optimal sieve depth.
Last fiddled with by Oddball on 2010-05-11 at 01:52 |