mersenneforum.org  

mersenneforum.org > Prime Search Projects > Conjectures 'R Us

Old 2014-05-13, 20:38   #78
gd_barnes
 
May 2007
Kansas; USA

Looks like we're complete to P=400T! Thank you very much Lennart. :-)

Can you run a couple of primality tests and let me know how long they take? One should be a candidate at n=~3M and the other a candidate at n=~3.8M. To make it quick, what I do is get the iteration rate after a couple of minutes of testing and then extrapolate that rate out to a full test.
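The extrapolation described here can be sketched in a few lines. An LLR test of k*2^n+1 runs roughly n iterations, so once the per-iteration rate stabilizes, the full test time is about n divided by the rate; the iteration rate below is illustrative, not a measured figure from the thread:

```python
# Sketch of extrapolating a full LLR test time from a short timing run.
# An LLR test of k*2^n+1 performs roughly n squaring iterations, so once
# the per-iteration rate stabilizes, total time is about n / rate.

def estimated_test_seconds(iters_per_sec: float, n: int) -> float:
    """Extrapolate a full-test time from a steady iteration rate."""
    return n / iters_per_sec

# Illustrative: ~595 iterations/sec at n = 3,000,264 implies ~5042 seconds,
# consistent with the timing reported later in the thread.
print(round(estimated_test_seconds(595.0, 3_000_264)))
```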

Old 2014-05-13, 21:08   #79
Lennart
 
Jun 2007

Quote:
Originally Posted by gd_barnes
Looks like we're complete to P=400T! Thank you very much Lennart. :-)

Can you run a couple of primality tests and let me know how long they take? One should be a candidate at n=~3M and the other a candidate at n=~3.8M. To make it quick, what I do is get the iteration rate after about a couple of minutes of testing and then extrapolate the rate out to a full test.
I am doing that now. n=3M takes about 5000 sec on the +1 side.
[2014-05-13 22:51:37 WEDT] Server: PSPfp, Candidate: 101746*2^3000264+1 Program: llr64.exe Residue: FAAA9DBA2026663F Time: 5042 seconds
Lennart

Old 2014-05-13, 22:09   #80
Lennart
 

[2014-05-13 23:51:33 WEDT] Server: PSPfp, Candidate: 101746*2^3800072+1 Program: llr64.exe Residue: 6F3FBC968E0255C4 Time: 8627 seconds



Lennart
Old 2014-05-14, 00:24   #81
gd_barnes
 

Lennart,

Thank you for the tests. Much to my surprise, it looks like we need quite a bit more sieving. (Possibly because your sievers are so fast.) What I did was extrapolate the factor removal rate over just the range that we want to break off and test:

At P=384T, the sieving rate was 11,614,009 p/sec and the expected number of factors for a P=4T range was 410.49. A P=4T range therefore takes 4T / 11,614,009 p/sec ≈ 344,400 secs, which amounts to 344,400 / 410.49 ≈ 839 secs/expected factor for the entire n=1-16777216 range.

Since n=2M-5M is only 17.88% (3M/16.77M) of the entire n-range of the file, that means we are removing factors from the n=2M-5M portion at a rate of 839/.1788 = 4692 secs/fac. Next we take the test time of a candidate at 60% of that n-range (i.e. n=3.8M) and you got 8627 secs.

Now we take the test time and divide it by the current factor removal rate to see how much further we need to sieve. So 8627 / 4692 = 1.84. Multiplying 1.84 by 384T gives 706T.

And finally we make downward adjustments for 2 reasons:
1. If the sieve file stayed the same size, the removal rate would slow down linearly with depth. So if the removal rate was 4000 secs/fac at P=400T, it would be 8000 secs/fac at P=800T. But the file does not stay the same size; factors are removed along the way. I've generally found that when we are close to 1/2x of the optimum depth, subtracting around 5% is a good estimate.
2. We will find some primes so not all of the candidates will need to be tested. Since almost all of these k's are abnormally low weight, we'll make a rough estimate of 2-3%.

So...subtracting off 7-8% of 706T leaves us in the ballpark of P=650T. Therefore I think we need to sieve to P=650T before breaking off any testing. I will change the first post to reflect where we need to sieve to.
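The whole calculation above can be condensed into a short sketch; the input numbers are the ones from this post, while the function and parameter names are mine:

```python
# Sketch of the optimal-sieve-depth estimate described above, using the
# figures from the post; names are illustrative, not from any sieving tool.

def optimal_depth_T(secs_per_factor_full: float, range_fraction: float,
                    test_secs: float, current_depth_T: float,
                    discount: float = 0.075) -> float:
    # secs/factor restricted to the n-range being broken off
    secs_per_factor_range = secs_per_factor_full / range_fraction  # 839/.1788 ~ 4692
    # sieve until the removal rate matches the test time at ~60% of the range
    ratio = test_secs / secs_per_factor_range                      # 8627/4692 ~ 1.84
    # subtract ~7.5% for file shrinkage and primes found along the way
    return current_depth_T * ratio * (1.0 - discount)

depth = optimal_depth_T(839, 0.1788, 8627, 384)
print(f"P={depth:.0f}T")  # in the ballpark of P=650T
```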

The reason that I requested the n=3M test is that I wanted to do this same calculation for n=0-5M as if we never tested this file at all. Interestingly the optimum sieve depth was very similar (~630T), which is what I had hoped for. (The similarity was surprising since your test times were so different.)

If your sievers are still available, continue firing away.


Gary
Old 2014-05-14, 00:59   #82
Lennart
 

Reserving 400T-410T


Lennart
Old 2014-05-15, 04:21   #83
VBCurtis
 
Feb 2005
Riverside, CA

Quote:
Originally Posted by gd_barnes
Lennart,
At P=384T, the sieving rate was 11,614,009P/sec. and the expected number of factors for a P=4T range was 410.49. That amounts to 839 secs/expected fac. for the entire n=1-16777216 range.

Since n=2M-5M is only 17.88% (3M/16.77M) of the entire n-range of the file, that means we are removing factors from the n=2M-5M portion at a rate of 839/.1788 = 4692 secs/fac. Next we take the test time of a candidate at 60% of that n-range (i.e. n=3.8M) and you got 8627 secs.
Gary
Gary-
How is it relevant to divide the time per factor by the fraction of the range you care about? I don't understand your methodology, and I don't believe it is correct. By your logic, it would be correct to break off 3M-3.5M right now, because that's just 1/32nd of the candidates and 839*32 is more than 3x the time per test. That's silly.

Since you will (eventually) continue the sieve from 5M-up, you want to calculate the extra time it takes to sieve 1-16M vs 5-16M, and divide that extra (marginal) time by the number of factors you'd find in 0-5M. This is the actual time per factor for the 0-5M region. As it happens, factors in 0-2M are only half as useful as 2-5M, since they only save you a double-check; you could adjust for that if you wish.

In my experience, marginal calculations lead us to sieve until the time per factor is double the time per test of the smallest candidates, and then break off a sizable (say, 0-3M in this case) chunk. But you wish to double-check your tests eventually, so each factor found saves you two tests except in 0-2M range.
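A minimal sketch of the marginal accounting Curtis describes, assuming the two sieve timings (full n-range vs. with the broken-off range removed) are measured separately; the figures below are placeholders, not numbers from the thread:

```python
# Sketch of marginal cost per factor for a broken-off n-range. The two
# sieve timings are inputs measured externally; names are illustrative.

def marginal_secs_per_factor(secs_sieving_full_range: float,
                             secs_sieving_reduced_range: float,
                             factors_in_broken_off_range: int,
                             double_check: bool = True) -> float:
    """Extra sieve time attributable to the broken-off n-range, per factor.

    With a planned double check, each factor saves two LLR tests, so the
    effective cost per saved test is halved.
    """
    extra_secs = secs_sieving_full_range - secs_sieving_reduced_range
    saved_tests_per_factor = 2 if double_check else 1
    return extra_secs / (factors_in_broken_off_range * saved_tests_per_factor)

# Placeholder figures: sieving 1-16M costs 1,000,000 s more than 5-16M over
# some P-range and yields 150 factors below n=5M -> ~3333 s per saved test.
print(round(marginal_secs_per_factor(4_000_000, 3_000_000, 150)))
```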
Old 2014-05-15, 10:14   #84
gd_barnes
 

I had a feeling you might chime in here. As you probably figured out, technically what we are doing here is sieving until the factor removal rate in our specific n-range is equal to the average testing time. I realize there is a slippery slope on a couple of things so I will explain:

First, there is no guarantee that we will double-check in the future, so I'm not taking that into account. Second, I would never suggest using this calculation where the break-off n-range is such that nmax/nmin < 2. (We're doing n=2M-5M, so it's 2.5 here.) Otherwise you would "optimally" sieve to only P=1K (or less, lol) for, say, n=2M-2.01M in a file with an n-range of n=~16.77M. That is, you would have sieved "enough" only because there is such a teeny percentage of the candidates in the break-off range.

I realize that the marginal rate that you have shown me and demonstrated to others in the past is technically more accurate in the extreme long run, i.e. 5-10 years or more. (I think it will take us > 5 years to complete the testing of this entire file to n=16.77M.) The reason that I don't use it is that it would result in an astronomical optimum sieve depth if we were sieving a monstrous n-range. But I think we have to keep in mind that testing/sieving software speeds have almost always changed dramatically over such long periods. This becomes a slippery slope in the opposite direction of the above example. Here, let's say we decided to sieve n=1 to 10^9 because technically it's more efficient in the extreme long run, but we still wanted to begin testing at n=2M. We would end up sieving for years to get the marginal factor removal rate up to where it needed to be for breaking off n=2M-5M. For that matter, it might not even make sense to break off such a small n-range, which would likely result in an even longer initial sieving effort.

Old 2014-05-15, 11:47   #85
Lennart
 

In case it has not already been done: I have checked all k's in this file with LLR up to n<600K.

I did not find anything.

Today I start 600k-800k.

Lennart
Old 2014-05-15, 22:06   #86
gd_barnes
 

Excellent. Thanks Lennart.
Old 2014-05-16, 15:00   #87
Lennart
 

600K-800K done; nothing found.


Starting 800K-1M


Lennart
Old 2014-05-16, 23:20   #88
VBCurtis
 

Gary-
The fact that breaking off small pieces leads to comical depth estimates should be the only indication you need that your method doesn't make sense. My method is solely concerned with the shortest project length for sieving plus testing all candidates with LLR. That may not be the important metric, as you indicate by worrying about software/tech changes as time marches on. If those worries play a large role, you chose too large a sieve to start with; but you did, so let's look at options right now.

Lennart's sieve speed should increase by 3-4% by using an n-min of 1M instead of 1. Since these double-checks will be finished this week, there's no point in leaving those candidates in the sieve! I would use an n-min of 2M, since the 1-2M range is already oversieved for double-checks.

However, the best plan would be to break off 2M-3M whenever you feel like it, and continue to sieve 3M-16M. When the LLR folks reach 3M, break off another 1M, and so on. This assumes sieving will continue while others are LLRing; if that's not the case, I suppose the personal preference of the siever is the deciding factor. The optimal depth for 2M-3M is still well above where you are already (I estimate 2500T), but this number is so high because you built a very efficient sieve by choosing such an enormous n-range. If you had set out with a 2M-5M sieve only, the optimal depth would be something like 1000T, and you'd be done already.

I don't think "optimal" matters in this case, though:
If optimal depth is 2500T, and you begin LLR at 500T instead, you can estimate the time "wasted" by LLRing instead of sieving by estimating the number of factors you would have found in the sieve and multiplying by the time difference between LLR test and marginal factor rate (usually half the average factor removal rate). You'd find factors for 5% or so of the file by going to 2500T instead of 500T, so the total time to test 2-5M might increase by 3% by starting now. You're willingly wasting 6% of sieve time already by keeping 0-2M in the sieve, so a "waste" of 3% of LLR time seems a small price to pay to get started on LLR now rather than a year from now!
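This back-of-envelope estimate can be sketched as follows; the 5% factorable fraction, the test time, and the marginal cost figure are illustrative stand-ins for the figures in this post:

```python
# Sketch of the "waste" estimate described above. Candidates that deeper
# sieving would have removed each cost a full LLR test instead of the
# (cheaper) marginal sieve cost of finding their factor. All inputs here
# are illustrative stand-ins, not exact figures from the thread.

def llr_waste_fraction(frac_factorable: float, test_secs: float,
                       marginal_secs_per_factor: float) -> float:
    """Fraction of total LLR time 'wasted' by starting LLR at the shallower depth."""
    net_extra_per_candidate = test_secs - marginal_secs_per_factor
    return frac_factorable * net_extra_per_candidate / test_secs

# e.g. 5% of the file factorable between 500T and 2500T, 8627 s tests,
# ~3500 s marginal cost per factor -> roughly 3% more total LLR time.
print(f"{llr_waste_fraction(0.05, 8627, 3500):.1%}")
```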

tl;dr version: 'optimal' is about saving single-digit percentages of project time. Personal joy outweighs 3% time savings, so who cares about optimal?