Optimal sieving of large n-ranges

2009-01-21, 10:58   #1
gd_barnes

Quote:
Originally Posted by MrOzzy
A question about Riesel base 117:

I'm currently sieving this base for all k for n=50k to n=250k.
I did a test: using LLR, a PRP test at n=50k takes about 1250 s, and at n=250k it takes about 42500 s.
At the moment I'm sieving at about p=20G, going through about 400k p/sec and removing one candidate roughly every 25 seconds, with 59500 remaining pairs.

I've never tried such a big sieving effort before, so I'm not sure how to handle it efficiently or what the optimal sieve depth would be.

Anyone around who can give me some advice?
Admin edit: Here is a response that I gave to MrOzzy about optimally sieving large n-ranges. The question has come up several times, so I thought it would be best for it to have its own thread.


Ozzy,

Here is what I would recommend:

1. Begin sieving the entire range.

2. LLR/Phrot test one candidate at 70% of the n=50K-100K range, i.e. n=85K, and make note of the testing time.

3. Sieve until your sieving removal rate is the same as the test in #2.

4. LLR/Phrot test the n=50K-100K range, being sure to remove k's with primes from the entire n=50K-250K sieve file as you go.

5. Delete n=50K-100K from your sieve file and begin sieving the entire rest of the file.

6. LLR/Phrot test one candidate at 70% of the n=100K-150K range, i.e. n=135K, and make note of the testing time.

7. Sieve until your sieve removal rate is the same as the test in #6.

8. LLR/Phrot test the n=100K-150K range, being sure to remove k's with primes from the entire n=100K-250K sieve file as you go.

9. Repeat steps 5 thru 8 for the n=150K-200K and 200K-250K ranges. In other words, when sieving, always sieve the entire remaining part of the file and make sure that k's with primes are removed from the entire file.

Because of the extremely long testing times on such a huge base, I highly recommend breaking it up into n=50K pieces like this. Normally I'd just break off one piece, perhaps doing n=50K-125K followed by n=125K-250K, but here the CPU-time savings from keeping it in smaller pieces is significant.

And finally, from a math/CPU perspective: the most CPU-efficient way to sieve and primality test is to run 3-5 prime tests, sieve the rest a teeny bit, run another 3-5 tests, sieve the rest a bit more, etc. ... you get the point. If there were an automated process that would switch back and forth between prime testing and sieving, that would be excellent. But since we have to have manual intervention, it's up to you how much you want to intervene and break off pieces. I don't know, maybe you'll want to go to n=25K pieces as you get to n>150K. It's totally up to you, but I think this should give you a clear understanding of the general way to optimally sieve a large n-range.
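If it helps, here is a rough Python sketch of the stopping rule from steps 2-3; the function name and the timings are just illustrative placeholders, not output from any real tool:

Code:
# Rough sketch of the break-even rule from steps 2-3 above.
# All numbers are placeholders; use your own measured timings.

def worth_more_sieving(sec_per_candidate_removed, sec_per_test_at_70pct):
    """Keep sieving while the sieve removes a candidate faster than an
    LLR/Phrot test at 70% of the current n-chunk would eliminate it."""
    return sec_per_candidate_removed < sec_per_test_at_70pct

# Example: the sieve currently removes one candidate every 900 s,
# and a test at n=85K (70% of the 50K-100K chunk) takes 3600 s.
print(worth_more_sieving(900, 3600))    # True  -> keep sieving
print(worth_more_sieving(3700, 3600))   # False -> start testing n=50K-100K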


Gary

2009-01-21, 16:29   #2
MrOzzy

Thanks for your explanation.

Maybe it would be interesting to start sieving again after each FFT length jump ... On this kind of base these increases in testing time can be quite painful, to be honest. At one FFT length increase around n=49.5k the testing time increased from about 995 s to 1230 s .. ouch!

If only there were some way to find out when they happen ...

I'm testing this base on a dual-core machine, so it might be interesting to LLR test on one core and sieve on the other. If done properly, both would keep eliminating pairs at about the same speed.

2009-01-22, 04:38   #3
gd_barnes

Very astute analysis! Technically, I could have said the same thing but was trying to make things easier. If your main goal is to reduce CPU time, you would be correct to sieve, primality test, break off range, etc. on each fftlen change.

One mathematical note there: If you are breaking off at every fftlen jump, you only need to test a candidate at 50% of the n-range (vs. 70%) between each jump to determine the sieve dropping rate to stop at. That's because the testing times increase in a straight line between the jumps. It's those jumps that give them their exponential increase (i.e. varies with the square of the n-value) over the long run.

To find the fftlen jump points, you could do it by brute force, running many tests for a few secs. each; LLR will tell you what the fftlen is. An easier way to "traverse" down to where a jump is:

Let's say the actual jump is at n=10K and that the fftlen jumps from 6K to 7K. You suspect it is somewhere between n=9K and 12K so:

Split the difference between where you think it is and test n=10500. Since the fftlen there is 7K, you go down, splitting the difference between n=10500 and n=9K, i.e. n=9750. There the fftlen is 6K, so you go up. Splitting the difference between n=9750 and n=10500 gives n=10125, and so on until you get as close as you want to the actual jump point.
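If you wanted to script that traversal, a small bisection routine like this Python sketch would do it. fftlen_at(n) is a made-up stand-in for "run LLR briefly on a candidate at exponent n and read off the fftlen it reports":

Code:
# Bisection sketch for locating an fftlen jump.  fftlen_at(n) is a
# hypothetical helper you implement yourself by briefly running LLR
# at exponent n and noting the FFT length it reports.

def find_fft_jump(lo, hi, fftlen_at, tol=100):
    """Assumes fftlen_at(lo) < fftlen_at(hi); returns the first n
    (to within tol) that uses the larger FFT length."""
    small_fft = fftlen_at(lo)
    while hi - lo > tol:
        mid = (lo + hi) // 2
        if fftlen_at(mid) == small_fft:
            lo = mid    # still on the small FFT: the jump is above mid
        else:
            hi = mid    # already on the large FFT: the jump is at or below mid
    return hi

# With the numbers above: find_fft_jump(9000, 12000, fftlen_at)
# tests n=10500, 9750, 10125, ... and homes in near n=10000.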

Doing it this way, you could nail every fftlen jump to within n=100 for a very large n-range such as yours with about 10-15 mins. of manual effort. Nailing it down to the exact n for every jump might require 20-30 mins.

So if you desire nearly optimal CPU time, and have the time to mess around with this plus the going back and forth between sieving and primality testing, then doing this is definitely the way to go.


Gary

2009-01-25, 15:36   #4
masser

Quote:
Originally Posted by gd_barnes
2. LLR/Phrot test one candidate at 70% of the n=50K-100K range, i.e. n=85K, and make note of the testing time.

3. Sieve until your sieving removal rate is the same as the test in #2.
For extremely large n-ranges where the testing times at the beginning of the range (say 5 mins/test) are insignificant compared to the final testing times (say 2 hours/test), the above two steps should be modified to:

2. LLR/Phrot test the largest candidate in the range and make note of the testing time. You don't actually have to complete the test; the time can be estimated from the iteration times.

3. Sieve until your sieving removal rate is one-third of the time in #2 (see the sketch below).
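A quick Python sketch of that estimate; the iteration counts and timings are made-up placeholders:

Code:
# Sketch of the modified rule: estimate the largest test's time from a
# partial run, then sieve until removals cost one-third of that.

def estimated_test_seconds(elapsed, iterations_done, total_iterations):
    """Extrapolate the full test time from a short partial run."""
    return elapsed * total_iterations / iterations_done

largest_test = estimated_test_seconds(120, 3_000, 180_000)  # ~7200 s
target_sec_per_removal = largest_test / 3                   # ~2400 s
# Keep sieving until the sieve needs about 2400 s to remove a candidate,
# then begin LLR/Phrot testing the range.
print(largest_test, target_sec_per_removal)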

2009-01-25, 15:37   #5
masser

Quote:
Originally Posted by gd_barnes

One mathematical note there: If you are breaking off at every fftlen jump, you only need to test a candidate at 50% of the n-range (vs. 70%) between each jump to determine the sieve dropping rate to stop at. That's because the testing times increase in a straight line between the jumps. It's those jumps that give them their exponential increase (i.e. varies with the square of the n-value) over the long run.
Minor quibble - it's not an exponential increase, but a polynomial increase.

2009-01-25, 18:03   #6
VBCurtis

Gary's analysis does not produce an optimal sieve, if by optimal you mean the sieve depth that results in the shortest project length.

For a large n-range, such as the n=50K-250K range MrOzzy mentioned, a much deeper sieve pays dividends. Try this experiment:

Sieve to Gary's suggested depth. Before breaking off a piece for primality work, set it to sieve an additional few T (a range with 200 or so factors). Using the -v switch, note the expected number of factors and the expected time to completion.

Now, remove the chunk you plan to LLR/Phrot. Repeat the above sieve setup, again just to note the expected number of factors and expected time to completion.

The differences between those two expected times and between those two expected factor counts give the extra cost, in time and in factors, of continuing to sieve with the range you are considering breaking off still included. If you divide the extra time by the extra factors, you will almost certainly find that the cost per factor is lower than the LLR time, so continuing to sieve with the small piece included is wise.

I have found through many such tests that breaking off a small piece is most efficient when the sieve removes factors at just over half the speed (double the time!) of a primality test on the smallest candidate. This assumes you plan to test the entire file eventually; with this project, one should sieve less deep to correct for the speedup entailed in finding a prime. You could estimate the probability of finding a prime in your entire sieve and correct for this reduction in project length; or just sieve some arbitrary fraction less deep. I have not done the math to correct for this effect yet, so I can't provide more detail.
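A small Python sketch of that bookkeeping; the "with"/"without" figures are placeholders standing in for the expected time and expected factors reported by two -v trial runs over the same extra p-range:

Code:
# Sketch of the factor-cost comparison.  All numbers are placeholders;
# plug in your own -v output for the two trial sieve setups.

time_with, factors_with = 480_000, 210        # whole file kept in the sieve
time_without, factors_without = 440_000, 170  # candidate chunk removed

extra_time = time_with - time_without             # 40,000 s
extra_factors = factors_with - factors_without    # 40 factors
cost_per_factor = extra_time / extra_factors      # 1,000 s per factor

llr_time_smallest = 1250  # s, a test on the smallest candidate in the chunk

if cost_per_factor < llr_time_smallest:
    print("cheaper to keep sieving the chunk than to LLR it now")
# Per the rule of thumb above: only break the chunk off once removing a
# factor costs roughly twice the LLR time of the smallest candidate.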

This improvement in the optimal-depth calculation only really matters for sieves with large n-ranges and VERY long primality tests. 40,000 seconds per test at the large end of the range certainly qualifies! For sieves whose longest tests are in the 1000-1500 sec range, the project time saved by using this method is not significant. Also, "significant" is relative; I do not claim this thinking is necessary, merely that it is closer to optimal than Gary's method.

Pointing out flaws in my logic is welcome, as I use this method for a very large sieve that has run over many dozens of CPU-months.

2009-01-25, 18:56   #7
MrOzzy

Just a remark, but it might matter a lot.
So far I only have access to 32-bit machines. If I recall correctly, sieving benefits a lot more from 64-bit than LLR or Phrot does, or am I wrong?

This might tilt the scale more to breaking off sooner again.

Also keep in mind that a k will be removed from the sieve once a prime has been found for it. This happened before: when I sieved from n=10k to n=50k (which took quite a while), I found a prime for one of the k's at n=10468. At that time I had 20k left, so I wasted almost 5% of my sieving time.
This is why I would like to start PRP testing as quickly as possible (in very small chunks). If I happen to find a prime near n=50k (which is quite likely, since my last prime was at n=32421), I will have wasted 10% of the sieving time ...

2009-01-25, 20:27   #8
henryzz

Are they 32-bit CPUs, or 64-bit CPUs with a 32-bit operating system?
Do the CPUs have virtualization technology?
See http://mersenneforum.org/showthread.php?t=10991 from post 11.

2009-01-26, 07:15   #9
MrOzzy

They are 32-bit CPUs, so there's no point in trying to run 64-bit.

2009-01-26, 08:40   #10
VBCurtis

Both Gary's and my suggestions compare sieve time on your machine to LLR/Phrot time on your machine, so bitness of the sieve program matters not. You compare only to your own machine.

Second, sieve time scales with the square root of the number of k's, so a sieve of 10 k's is roughly 6% slower than a sieve of 9 k's. Consider how long you spend sieving in total. Take 6% of that, and that is in effect the "wasted" time if you find a prime at the very outset of LLR/Phrot work. Since you may not find a prime at the outset, it is hard to consider the effort wasted ahead of time, you know?
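For the record, that figure is just the square-root rule worked out:

Code:
# Square-root scaling of sieve time with the number of k's:
slowdown = (10 / 9) ** 0.5 - 1
print(f"{slowdown:.1%}")   # ~5.4%, the "roughly 6%" quoted above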

If you can model the expected number of primes in a particular block, you can adjust the sieve depth. It's not a very big adjustment for a small number of k's like your effort, as the chance of a prime is rather small.

Finally, the optimal depth is not a sensitive calculation: you can oversieve or undersieve by 20% and hardly change the total project length. If you feel better sieving less, sieve less. My method suggests sieving until removals take roughly twice the LLR time before breaking off a piece, so mildly undersieving would mean stopping at perhaps 50% or 60% longer than the LLR time for the smallest candidate.
-Curtis

2009-01-26, 19:25   #11
michaf

*Warning* obsolete post ahead:
(Bugger... I should read the entire thread before posting...)

Don't forget that finding a prime will reduce the time needed to LLR the rest of the broken-off file. Thus you'll need to sieve a little bit less deep than Curtis suggests.

(But anyways, it's all human intervention, and time-consuming, so any guess is a good guess, as long as it is in the right ballpark :) )
