mersenneforum.org  

Go Back   mersenneforum.org > Prime Search Projects > Conjectures 'R Us

Old 2008-07-05, 16:03   #23
KEP

Quote:
Originally Posted by gd_barnes View Post
It's been calculated somewhere, but I can't remember where; perhaps on the RieselSieve or SOB sites.

As Micha stated earlier, having BOINC involved won't prove many of our conjectures. I'll take that further and state that having BOINC, RieselSieve, SOB, and PrimeGrid all involved combined won't prove many of our conjectures in our lifetimes, and also that many will not be proven in the next 75-100 years even with a doubling of computer speed every 2 years, which is unlikely to continue.

Sieving should be 5-10% of the total of any effort. Having them sieve is no more effective than having them LLR. It's just a matter of where we want to put our resources.

Continuing to sieve and sieve and sieve ad infinitum does not save CPU time. It wastes time. As Anon has said, RieselSieve has ranges so over-sieved, it's ridiculous. If they had used more of their resources to LLR, they'd be far further along than they are. I think that SOB has done a much better job of managing resources between sieving and LLRing.

Don't mistake constant sieving for saving total CPU time. Sure, it saves LLRing time, but it wastes total CPU time. Sieving to the optimal depth for a varying n-range is what saves the most time overall. For instance, you mentioned sieving n=100K-1M on bases 22 and 23 to P=25T or perhaps even P=100T. It would likely be a waste of time to sieve the entire range that high. You would probably be better off sieving to P=5T-10T, breaking off n=100K-150K (or n=100K-200K), LLRing it, sieving n=150K-1M (or n=200K-1M) to P=10T-20T, breaking off another range, etc. Also, I think you'll find it gets very boring to sieve continually for months at a time. Doing one, then the other, and then repeating that process is both the most interesting and the most optimal.
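The trade-off described above boils down to a simple break-even test: keep sieving a range only while the sieve removes candidates faster than LLR could test them. A minimal sketch (the function name and all timings below are hypothetical, for illustration only):

```python
def should_keep_sieving(secs_per_factor_found: float, llr_secs_per_candidate: float) -> bool:
    """Sieving a range still pays off while the sieve eliminates
    candidates faster than LLR tests could; past that break-even
    point, further sieving wastes total CPU time."""
    return secs_per_factor_found < llr_secs_per_candidate

# Hypothetical figures: the sieve finds one factor every 30 minutes,
# while one LLR test at the current n-level takes 45 minutes.
print(should_keep_sieving(30 * 60, 45 * 60))  # True: keep sieving
```

Once the removal rate drops below the LLR rate, the same CPU time eliminates more candidates by testing than by sieving, which is the sense in which over-sieving "wastes total CPU time".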

Neither RieselSieve nor SOB would help us solve other bases. They have way more than plenty of work as it is. BOINC and PrimeGrid would help us, if you don't mind a bunch of users who know nothing about primes doing the searching. Personally, I don't like it because it removes a lot of the human interaction among people who are passionate about math or computers or primes, which I generally enjoy. That's how I learn things: from people who know much more than I do.

Here I/we aim to pave the way for future projects by searching many bases to a moderate to high n-range. For instance, I think you and Micha could start a prime search project to solve the base 3 conjectures (and/or bases 7 and 15) since you both seem to enjoy base 3. I'd gladly ship one or more of those off to another project. We have plenty of work even without those. I'm not greedy. You could set up your own forum, promote things, create and update web pages, determine how things should be searched, etc. It's a lot of work but once you get it going, it's not so bad. If you ever want to consider that, I can point you in approximately the right direction and I'd certainly pitch in some cores to assist in searching on them at times.


Gary
Well, I'm actually considering attacking Sierpinski base 3 after I'm done with base 19 (39 primes lower, at ~11.7K n)... Since at least 3 of the 4 sieve files have been sieved to your suggested P=5T, I'm considering releasing those for public availability and focusing only on Sierpinski base 3 for a while, or at least until I feel able to handle more manual work than I can at the moment. So if you'd like, Gary, I can send you the .abcd files and the checkpoint files for each of the Riesel and Sierpinski bases shortly.

I also feel it is more motivating for me to run the base 3 conjectures on both my old as well as my new PC, since that is what my quad was bought for, at least.

Take care my friends

Kenneth!
Old 2008-07-05, 23:56   #24
gd_barnes
 

Quote:
Originally Posted by KEP View Post
Well, I'm actually considering attacking Sierpinski base 3 after I'm done with base 19...

OK, but P=5T was only meant as a very rough estimate of the optimal sieve depth for breaking off n=100K-150K or 100K-200K for bases 22 and 23. You took an example I was giving and treated it as fact. It is not. It was an example of how it is POSSIBLE that over-sieving occurs. Have you actually LLR'd a candidate and verified the optimal sieve depth for breaking off n=100K-200K or 100K-150K?

Regardless, thank you for the files. I'll do what I can with them. The first thing I'll have to do is determine the optimum sieve depth. I will now unreserve them for you.

I'm confused. You have reserved Riesel base 3 to k=500M. Why the sudden interest in Sierp base 3? Don't you need to finish Riesel base 3 to k=500M first? Micha is already searching Sierp base 3 to k=200M.

Before starting on Sierp base 3, please report a status on Riesel base 3 or let us know if you're releasing it before starting on the Sierp side of things.


Thanks,
Gary
Old 2008-07-06, 06:00   #25
masser
 

Quote:
Originally Posted by gd_barnes View Post
But if you did that, you'd have to sieve many of our efforts to n=10M or more and sieve them into the quadrillions. (Try sieving base 19 to n=10^18 and see what the optimum would be! We'd never be searching for primes, even if sr(2)sieve could sieve such a huge n-range.) After all, mathematically, that's the way to spend the least CPU time over, what, 100 years or more, when we're no longer alive? That would be laughable. That's why we chose to sieve only n=100K-400K on Sierp base 6. I mean, why sieve n=100K-1M when we may not even have LLR'd to n=400K for 4-5 years, by which time many of us may not be interested in CRUS anymore?

People SAY that sieving larger n-ranges is more efficient. I say it's all in the way you look at it. IMHO, sieving a range whose n-max is more than 4 times its n-min (i.e. n=100K-400K) is a waste of CURRENT resources if the total CPU time needed will be over 1-2 years. The higher n-ranges should be sieved by FUTURE resources. After all, who knows: you may not even want to search such a large n-range, or all of your k's may already be eliminated. That's why we don't double-check ourselves right after finishing n=260K-400K or something. We do n=100K-260K now and will do n=260K-400K when computers are faster and cheaper in the future.

People are assuming a static view of the future in their mathematical models for long-term sieving optimization. IMHO, that's a bad path to take.
The dat file for Sierpinski/Riesel Base 5 covers the range 0 <= n <= 2*10^6 (2 million). It's huge! The decision to make it that large was not mine, but I can think of several reasons for its size.

1. The SRB5 project will NOT be solved by n = 2*10^6, but one has to draw the line somewhere and one of the former moderators of the project drew the line there. Some crude calculations suggest that we will have about 90 sequences (out of an initial 500, roughly) remaining when/if we ever reach that point. With 90 sequences, another larger n value can be chosen for the second sieving phase of the project. In some sense, the large dat file lends a sense of stability to the project. A large dat file is something we won't have to worry about re-initializing or changing drastically for roughly 10 years, maybe much longer.

2. Sieving a larger range is more efficient, but how do you choose how large? I have no idea what the initial reasoning was, but looking at our dat file now, it seems that it suits the broad testing interests of the forum community. There are low-n values (n<1000) for people interested in factoring. There are slightly larger values (n approx. 70,000) for people interested in utilizing older, slower machines for double-checking. There are the current lowest untested n-values (260000-300000) that with some effort will net a prime on the top 5000 list (top 1000, actually). And then there are some huge values for the really ambitious tester. If you had the computational firepower, you could find a prime within our dat file that would rank in the top 20 largest primes. Or maybe you would be happier with a top 50 prime, or a top 100 or top 200...Depending on your level of commitment, we have something that could suit your interests. That's a nice aspect of the large dat file.

3. We actually want to solve the conjecture, eventually. By choosing a large n-value and heavily sieving we are doing a lot of work on behalf of future testers. Yes, it's boring, but it IS very useful.

These arguments are only meant as justification for large sieving ranges, not a justification of exclusively sieving.

Some arguments for testing without reaching the optimal sieve depth:

1. To keep a project going, one has to consider the psychology of the participants. Finding primes is fun. And regularly finding primes, I believe, is the best way to maintain interest in these prime search projects. Consider TPS. They were going gangbusters when they were finding reportable primes. Once they fell off the top 5000 list, it seems a lot of the interest dissipated. (I know primegrid is still testing for them, but that has always seemed like a particularly mindless sort of interest in a project's goals....)

2. Optimal sieve depth takes way too long to achieve, especially with large n-range dat files. The optimal sieve depth for the SRB5 dat file on an Intel Xeon 3.4 GHz processor is reached when sieving removes a candidate every 43 hours. That's right, no typo there: 43 hours. At n=2*10^6, tests take 110 hours on these processors. (I calculated these benchmarks two years ago, with slightly older versions of the testing software, but I'm sure any new benchmarks would be of the same order of magnitude.) At current testing n-levels, the tests take about an hour, and the expectation for finding a prime is roughly 8000 tests. 8000 hours ≈ 333 days of testing on a single two-year-old processor to find a (reasonably large) prime. It would actually be much shorter on today's quad-core processors. At the optimal sieve depth, 8000 hours would only find 8000/43 = 186 factors. So, what sounds more appealing: 186 factors, or a nice (reasonably large) prime, which coincidentally would remove 18,500 testing pairs from the dat file? (Admittedly, if the optimal sieving depth were achieved, the 18,500 figure would be much smaller, but I doubt it would be less than 186.)
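The arithmetic above can be checked in a few lines (all figures are the ones quoted in this post; the 43-hour and ~1-hour benchmarks are masser's, not re-measured here):

```python
hours_per_factor = 43        # optimal-depth sieving: one factor every 43 hours
tests_per_prime = 8000       # expected ~1-hour tests per (reasonably large) prime
cpu_hours = tests_per_prime * 1  # 8000 CPU-hours on the same processor

print(cpu_hours / 24)                 # ~333 days of testing per expected prime
print(cpu_hours // hours_per_factor)  # 186 factors for the same CPU time
```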

To summarize:

1. There is nothing wrong with an exceedingly large dat file.

2. Finding primes is fun (and still useful to these projects), so don't discourage testing, even at sub-optimal sieve depths. Let the old battle-axes (like hhh on PSP, and myself on SRB5) perform and encourage the heavy sieving (as we understand its importance) but let the prime-hunters do their thing too.

Just my two cents on a much discussed topic that will not die...
Old 2008-07-06, 08:07   #26
KEP

Quote:
Originally Posted by gd_barnes View Post
OK, but P=5T was only meant as a very rough estimate of the optimal sieve depth for breaking off n=100K-150K or 100K-200K for bases 22 and 23...
Regarding bases 22 and 23: LLR hasn't been timed to determine the optimal sieve depth for n=100K-150K or n=100K-200K.

About Riesel base 3: all primes for k<=500M with n<=500 have been found and verified. The ETA for the Riesel base 3 range is about 2 weeks, maybe 3. I'm only slightly interested in Sierpinski base 3 because it actually reduces the manual work: I can do an immediate proof of each PRP and then continue to the determined n if the PRP is in fact not a prime. So there is less manual work, because I don't have to split a ~25 GB data file and sift through it to find the PRPs that were actually composite rather than prime :) In short, the interest in Sierpinski base 3 arose because a strict prime search (not a PRP search) can be conducted. However, if someone can come up with a solution so that I don't end up in an infinite Brillhart-Lehmer loop where it tries different bases and square roots, then of course I would like to continue with Riesel base 3... but that is not the plan for now, since it involves a great deal more manual work than I feel like putting into any project at the moment.

On another side note, I had actually noted that Michaf is doing k=120M-200M, and I was considering reserving something above that if I decided to continue doing CRUS work.

A lot has been said, and I hope it made sense!

Regards

KEP
Old 2008-07-06, 16:55   #27
gd_barnes
 

Quote:
Originally Posted by KEP View Post
Regarding bases 22 and 23: LLR hasn't been timed to determine the optimal sieve depth...

Why are you doing primality proof tests on Riesel base 3? It's not necessary and takes far too long. I've suggested before just doing PRP tests, which are extremely fast. You can do primality proofs after the PRP tests and the manual work. It's unlikely you'll find any composites amongst the PRPs.

I'm going to unreserve Riesel base 3 because a test to n=500 is not something that others could start from: the k's remaining and the files are too big to administer and send to people. It would take only a moderate amount of CPU time to start such an effort over.

The only thing I'll have you reserved for is Sierp base 19 to n=100K.

If you want to reserve Sierp base 3, just specify a k-range, but please keep it small (i.e. k<50M) and complete what you reserve to at least n=10K, preferably n=25K.


Thanks,
Gary

Last fiddled with by gd_barnes on 2008-07-06 at 16:56
Old 2008-07-06, 17:52   #28
KEP

@ Gary:

I think you have quite misunderstood; here is the status of Riesel base 3 and how it was done:

1. PRP test all k<=500M to n<=500 (complete)
2. Verify the PRPs (complete)
3. Sieve the remaining k's for 500 < n <= 25,000 (in progress)
4. Take the remaining k's to n<=25,000 (going to start soon)
5. Sieve the k's whose PRPs turned out composite, from n>=1 to n<=25,000
6. Test that sieve file and remove all k's that yield primes

As you can see, the work is not taken only to n<=500 once it is complete; that was just the initial phase... so please keep my reservation, or else 6-8 weeks of work go down the drain for nothing :(

Regards

KEP

Ps. I'm not going to work on anything other than base 19 in the future, since I feel like I'm more or less an obstacle to this project... and of course interest is starting to fade... happy hunting, everyone...
Old 2008-07-06, 20:33   #29
michaf
 

Quote:
Originally Posted by masser View Post

1. There is nothing wrong with an exceedingly large dat file.

2. Finding primes is fun (and still useful to these projects), so don't discourage testing, even at sub-optimal sieve depths. Let the old battle-axes (like hhh on PSP, and myself on SRB5) perform and encourage the heavy sieving (as we understand its importance) but let the prime-hunters do their thing too.
Ad. 1. Nope, I must agree here for sure
Ad. 2. Optimal sieve depth has already been achieved for the n's you are currently testing at SRB5; sieving more will only help a little on the 'low' ranges.
I.e., finding a factor for 123*5^100000 has much less 'impact' than finding a factor for 123*5^1000000, even though both shave time off the total time-to-PRP. As long as you are willing to take the project to the 'end' (say n=2M), sieving will be useful as long as you find more factors per hour than candidates you can PRP per hour (timed at 70% of the remaining range).
(Gee.. I hope I make at least SOME sense here :))
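michaf's rule of thumb might be sketched as follows (the 70% point and all timings are illustrative assumptions, not measured values):

```python
def sieving_still_useful(factors_per_hour: float, prp_secs_at_70pct: float) -> bool:
    """Keep sieving while the sieve yields factors faster than you could
    PRP candidates, where the PRP time is taken for a candidate at ~70%
    of the remaining n-range (michaf's rule of thumb)."""
    prp_tests_per_hour = 3600.0 / prp_secs_at_70pct
    return factors_per_hour > prp_tests_per_hour

# Hypothetical: the sieve removes 2 candidates/hour, while a PRP test at
# the 70% point of the remaining range takes 2400 s (1.5 tests/hour).
print(sieving_still_useful(2.0, 2400.0))  # True: keep sieving
```

Timing the PRP test partway into the remaining range, rather than at its start or end, is what accounts for a factor's average impact across the whole unfinished file.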
Old 2008-07-06, 21:35   #30
gd_barnes
 

Quote:
Originally Posted by masser View Post
The dat file for Sierpinski/Riesel Base 5 covers the range 0 <= n <= 2*10^6 (2 million). It's huge!...

I'll be honest here. I'm not sure if you're against what I'm saying, in favor of, or neutral on the issue. Perhaps a little of all 3.

I'd like to encourage people to do what they want to do and not force sieving down people's throats. That's what I feel RieselSieve has done. Up until the last year, they sieved things into the ground. You couldn't even go to their main page and find any kind of work to do except sieving; you had to go to a forum.

I think you may have misinterpreted my statement about sieving with n-max over 4 times n-min as dissing anyone who does so. Not so. I simply want to save current, slower resources in favor of future, faster resources if what is being sieved will take longer than ~2 years to complete, a time in which computer speeds typically double. I realize this is a radical point of view and requires some new ways of looking at things.

It's kind of like political debates about the economy or the world. Do we do what is best for the next 1-4 years or for the next 10-20 years or for the next 50-100 years...tough choices there. Since this is a recreational activity, I like to look at things over the next 1-4 years because 80-90% of people's specific interests change in that time. Heck, in 4 years, I may wish to get back into Pente, a board game I'm very passionate about.

In my personal life, I look at things over 10-20 years, and I feel that in the political and world arena, things should be looked at over 50-100 years. So IMHO, it's a matter of what pursuit or world-view you look at things.

I have to commend you on thinking that Base 5 can be proven. That's certainly taking the 50-100 year approach to things. But IMHO, it will not be proven in most of our lifetimes, so if it were up to me, I would have suggested sieving n=100K-400K originally, as we have started to do for Sierp base 6. In 2-3 years, when LLRing is nearing completion of that range, we can then start sieving n=400K-1M or 400K-1.6M with computers that should be twice as fast, on far fewer k's than there are now.
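The "future resources" argument can be made concrete with a simple discounting model: if hardware speed doubles every 2 years, a fixed amount of CPU work deferred by t years costs 2^(-t/2) as much in today's terms. (The doubling period and the 4-year deferral below are illustrative assumptions, not measured trends.)

```python
def relative_cost(years_deferred: float, doubling_years: float = 2.0) -> float:
    """Cost of a fixed amount of CPU work relative to doing it today,
    assuming hardware speed doubles every `doubling_years` years."""
    return 2.0 ** (-years_deferred / doubling_years)

# Sieving n=400K-1M today vs. deferring it ~4 years, when LLR nears n=400K:
print(relative_cost(4))  # 0.25: the deferred sieving costs a quarter as much
```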

I really like your idea of having many different n-ranges available for testing. I've attempted to do that at NPLB with the different drives and individual-k testing, albeit with quite limited success on the latter.

I had no clue that you had all of these different ranges that could be tested. I thought you just had the two servers, each currently testing somewhere between n=200K and 300K, one of which I worked on for ~7-10 days. Where are these very small and very large n-ranges that can be tested? But I am confused: if there are small n-ranges still to be tested, why is that? Wouldn't it be better to knock those out and remove huge numbers of candidates from the huge dat file (if a prime is found), if your goal is to ultimately prove the conjectures? If the goal is to prove the conjectures, I think the small n-ranges should all be tested. But perhaps you've determined that the fun it creates for everyone is more valuable than the proof. I agree that it's not always easy to strike a balance between the two.

I agree that we need to let people do what they like to do but we also have to encourage them to do what we need them to do. I've probably not done a good job of that here at CRUS because I've been a little more focused on NPLB.


Gary

Last fiddled with by gd_barnes on 2008-07-06 at 21:38
Old 2008-07-06, 21:54   #31
gd_barnes
 

Quote:
Originally Posted by KEP View Post
I think you have quite misunderstood; here is the status of Riesel base 3 and how it was done...

Apologies, Kenneth. The communication interpretation was bad on my end this time. When you said you were taking on Sierp base 3 and had searched k<500M on Riesel base 3 only to n=500, I took that to mean you were changing efforts.

You are not an obstacle. Where I'm coming from is the difficulty created when many huge reservations change on your end. If you make a very large reservation and then stop it, it takes that much more time for me to coordinate it in the future. It really helps if people complete what they reserve. I understand that things happen (computers crash, a range takes much longer than you thought, etc.) as long as they aren't happening too frequently.

It's also great if you want to sieve n=100K-1M, test n=100K-200K, and leave the remaining file for the team...or like on bases 22 and 23, if you just want to sieve, that's great but it's better if I know that in advance.

I don't see fading interest here. As with any project, there was great activity at the start and it has now simmered down to a relatively consistent level over the last 4 months or so. What I see now is a couple of people on vacation (Anon and Karsten) and a couple of people who have reduced their resources on prime searching for the hot summer months. You might want to peek at the reservations pages and see what you think. It's just that many people are searching at high n-ranges and only report a status once a month or so, or when they find prime(s).

I will re-reserve k<500M on Riesel base 3 to n=25K for you. Sorry that I misunderstood you.


Gary

Last fiddled with by gd_barnes on 2008-07-06 at 21:57
Old 2008-07-06, 22:06   #32
gd_barnes
 

Quote:
Originally Posted by KEP View Post
In short, the interest in Sierpinski base 3 arose because a strict prime search (not a PRP search) can be conducted. However, if someone can come up with a solution so that I don't end up in an infinite Brillhart-Lehmer loop where it tries different bases and square roots, then of course I would like to continue with Riesel base 3... but that is not the plan for now, since it involves a great deal more manual work than I feel like putting into any project at the moment.

KEP

Kenneth,

I just now figured it out. The above is why I thought you had unreserved Riesel base 3. First you said Riesel base 3 had an ETA of 3 weeks to completion; then you said 'of course I would like to continue with Riesel base 3... but that is not the plan for now since it involves a great deal more manual work...'.

It looks like you're saying that continuing on Riesel base 3 is not the plan for now. I just thought I'd point out why I got confused. Anyway, it doesn't matter; I'm reserving it again for you.


Gary

Last fiddled with by gd_barnes on 2008-07-06 at 22:09
Old 2008-07-06, 23:49   #33
gd_barnes
 

To all:

I messed up my prior calculation of k's remaining on Sierp base 19. I had the incorrect # of k's remaining at n=10K; I had used the number remaining at n=10.45K. This was a 30-prime difference and made a huge difference in the ensuing calculations.

The calculations were all correct to the best of my knowledge. It was the data going in that was incorrect, i.e. garbage in, garbage out.

This explains why KEP was finding primes at a faster rate than what I had expected.

To be consistent and not "cheat", I still used the # of primes for n=10K-11.2K for the estimates, and no subsequent primes that he found up to n=12.32K.

I edited the two incorrect posts above for the correct calculations.

This makes the estimate ~636 k's remaining at n=100K and reduces the estimated CPU years needed to complete Sierp base 19 to n=100K from 40 to 31.8. Further sieving on his file will likely reduce the total CPU time needed but not substantially.

But better: it reduces the estimated final prime to be found from n=~10^17-10^19 to n=~10^12-10^13. Now it's likely n<10 trillion. Wee!
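The kind of projection being corrected here can be sketched with a standard heuristic: for a fixed k, treat primes as arriving with intensity roughly w/n for some weight w, so the expected number of primes per k between n1 and n2 is about w*ln(n2/n1), and the fraction of k's surviving with no prime to n_target is exp(-w*ln(n_target/n2)). This is a simplified model with hypothetical inputs, not the actual calculation used above:

```python
import math

def project_survivors(k_remaining: int, primes_found: int,
                      n1: float, n2: float, n_target: float) -> float:
    """Fit an average weight w from primes observed in [n1, n2], then
    estimate how many k's still have no prime at n_target, assuming a
    Poisson model with per-k intensity w/n (a crude heuristic)."""
    w = primes_found / (k_remaining * math.log(n2 / n1))
    surviving_fraction = math.exp(-w * math.log(n_target / n2))
    return k_remaining * surviving_fraction

# Hypothetical inputs: 700 k's remaining at n=10K, 30 primes found
# between n=10K and n=11.2K; project the survivors at n=100K.
survivors = project_survivors(700, 30, 10_000, 11_200, 100_000)
```

A 30-prime error in `primes_found` shifts the fitted w and then compounds through the exponential, which is why the corrected input moved the projections so much: garbage in, garbage out.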


Gary

Last fiddled with by gd_barnes on 2008-07-06 at 23:51