mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Conjectures 'R Us (https://www.mersenneforum.org/forumdisplay.php?f=81)
-   -   Riesel base 3 reservations/statuses/primes (https://www.mersenneforum.org/showthread.php?t=11151)

gd_barnes 2014-03-02 11:30

[QUOTE=Puzzle-Peter;368056]There are too many subranges in R3. Let's start consolidating. I'll take k=10M-20M to n=500k.[/QUOTE]

Good idea Peter. Probably already obvious to you...next would be k=2.05G-2.5G for n=25K-100K. There's a sieve file started for that one.

Puzzle-Peter 2014-03-02 19:45

[QUOTE=gd_barnes;368122]Good idea Peter. Probably already obvious to you...next would be k=2.05G-2.5G for n=25K-100K. There's a sieve file started for that one.[/QUOTE]

I wasn't sure which one to start with...

gd_barnes 2014-03-02 21:24

[QUOTE=Puzzle-Peter;368149]I wasn't sure which one to start with...[/QUOTE]

Of course it doesn't matter. The one you are working on will take much less time, so it makes sense to me.

Puzzle-Peter 2014-03-04 15:31

[QUOTE=gd_barnes;368122]...next would be k=2.05G-2.5G for n=25K-100K. There's a sieve file started for that one.[/QUOTE]

We all know I will do it, so I started sieving the file more deeply. Please reserve the range for me to n=100K.

VBCurtis 2014-03-11 06:58

I am interested in learning the limits of sr2sieve. Once Peter completes 2.05-2.5G to 100k, that will make 20M-4G a nice block of k's to sieve from n=100k to, say, 300 or 400k for future forum use.
Gary-
After Peter finishes, is there a low-effort way for you to produce a file I could input to srsieve to create this sieve? I could then convert to sr2 format and compare speeds, etc. If sr2 can't handle this many k's (in 8GB) or is slower, I can experiment to find the number of k's where regular srsieve is the same speed as sr2sieve.
-Curtis

gd_barnes 2014-03-11 08:04

[QUOTE=VBCurtis;368740]Gary-
After Peter finishes, is there a low-effort way for you to produce a file I could input to srsieve to create this sieve? I could then convert to sr2 format and compare speeds, etc. If sr2 can't handle this many k's (in 8GB) or is slower, I can experiment to find the number of k's where regular srsieve is the same speed as sr2sieve.
-Curtis[/QUOTE]

Yep, no problem. I've done a little experimenting of my own. I've found that in almost all scenarios, regardless of the number of k's, sr2sieve with the -x option is faster than srsieve. But you might find differently with different hardware. My experience so far is that with the -x option, sr2sieve is limited only by your computer's memory, not by the number of k's.

henryzz 2014-03-11 11:31

What is the memory use a function of? Does the number of candidates remaining affect it? The number of sequences? The max n? The number of subsequences?

Puzzle-Peter 2014-03-11 16:47

[QUOTE=VBCurtis;368740]Once Peter completes 2.05-2.5G to 100k, [/QUOTE]

This will probably take several weeks as I will not put too many cores on this task. If you want to start experimenting, you might try sieving k=4G-5G to n=100k.

VBCurtis 2014-03-12 03:23

Peter-
I am in no hurry whatsoever. I previously sieved 0.5-2G from 25k to 100k, handing the sieve off to Lennart after I ran LLR to 30k. I did go look up some data from that effort, but didn't fire up sr2sieve to see how much memory it took.

Someone in this thread commented that moving from 25k to 100k removes about 2/3rds of the k values, so a tripling of k-range ought to have about the same number of sequences; however, I'll be running a much larger n-range, which should again take more memory.
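A quick back-of-the-envelope check of that claim, using only the rough figures above (the ~2/3 removal is a thread estimate, not a measurement):

```python
# Rough sequence-count arithmetic using the figures above (the ~2/3
# removal estimate is from the thread, not a measurement).
prev_span = 2.0e9 - 0.5e9      # previously sieved k-range: 0.5G-2G
new_span = 4.0e9 - 20.0e6      # proposed k-range: 20M-4G
surviving = 1 - 2 / 3          # fraction of k's left after testing to n=100k

# The 100k-start sieve of 20M-4G should hold roughly this many sequences
# relative to the raw 25k-start file for 0.5G-2G:
scale = (new_span / prev_span) * surviving
print(round(scale, 2))         # ~0.88: about the same size, as expected
```

So a nearly tripled k-range with a third of the k's surviving does come out at about the same number of sequences.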

Henry- I think memory use is a function of number of sequences, number of subsequences (so low-weight k's sometimes use less memory than higher weight ones, per sequence), and n-range. I do not think the k-values matter when the -x flag is used.

To be honest, I am unsure if I'll sieve long enough to produce a file ready for testing at 100k, but even if I get halfway there it might give a boost to some shared-at-forum efforts to find some primes for base 3.

gd_barnes 2014-03-12 04:22

[QUOTE=VBCurtis;368809]Peter-
I am in no hurry whatsoever. I previously sieved 0.5-2G from 25k to 100k, handing the sieve off to Lennart after I ran LLR to 30k. I did go look up some data from that effort, but didn't fire up sr2sieve to see how much memory it took.

Someone in this thread commented that moving from 25k to 100k removes about 2/3rds of the k values, so a tripling of k-range ought to have about the same number of sequences; however, I'll be running a much larger n-range, which should again take more memory.

Henry- I think memory use is a function of number of sequences, number of subsequences (so low-weight k's sometimes use less memory than higher weight ones, per sequence), and n-range. I do not think the k-values matter when the -x flag is used.

To be honest, I am unsure if I'll sieve long enough to produce a file ready for testing at 100k, but even if I get halfway there it might give a boost to some shared-at-forum efforts to find some primes for base 3.[/QUOTE]

Comments on all 3 paras:

1. Testing n=25K to 100K generally removes ~70% of k-values base 3 regardless of the k values. It should be lower than that upon a subsequent quadrupling of the n-range for n=100K-400K, simply because the remaining k's at n=100K will have a lower average weight than the remaining k's at n=25K. Removal of 63-67% of k's for n=100K-400K seems like a good estimate.
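A toy model of that weight effect in Python. The weight values and the exponential-survival heuristic are illustrative assumptions only, not project data, but they show why survivors of one range lose a smaller share in the next:

```python
import math

# Toy model: survivors of one n-range are biased toward low weight, so
# the next n-range of the same ratio removes a smaller share of them.
# The weights and the exp() survival heuristic are assumptions.
weights = [0.4, 0.8, 1.2, 1.6]   # hypothetical per-k prime weights
pop = [1.0] * len(weights)       # equal numbers of k's in each class

def survive(w, ratio):
    # Heuristic: expected primes over n in [a, b] scale like w*ln(b/a),
    # so P(no prime in the range) ~ exp(-w*ln(b/a)).
    return math.exp(-w * math.log(ratio))

# n=25K-100K is a 4x range; so is n=100K-400K.
after1 = [p * survive(w, 4) for p, w in zip(pop, weights)]
removed1 = 1 - sum(after1) / sum(pop)           # first-range removal
after2 = [p * survive(w, 4) for p, w in zip(after1, weights)]
removed2 = 1 - sum(after2) / sum(after1)        # second-range removal

print(round(removed1, 2), round(removed2, 2))   # second range removes less
```

With these made-up weights the first range removes about 70% and the second only about 60%, matching the direction of the estimate above.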

2. Here is my belief about the amount of memory used by sr2sieve: it is a function of how many k/n pairs are in the sieve file, which of course is a direct result of the # of k's, the n-range, and the weight of the k's. Although the program removes the pairs from memory as it finds factors (so that duplicate factors are not found for the same pair within a single instance of the program), the memory from those k/n pairs is not "cleared out" until the program stops. Once the program stops, if you want to reduce the amount of memory used, you have to physically remove the factored pairs (using srfile) so that there are fewer pairs in the sieve file. (I have not looked at the program's code; this is only from experimentation and observation of memory use in the Windows task manager.)
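If memory really does track the loaded k/n pairs, a rough floor is easy to estimate. The bytes-per-pair figure below is a guess for illustration, not something measured from sr2sieve:

```python
# Back-of-the-envelope consistent with "memory tracks k/n pairs loaded".
# The per-pair size is an assumed figure for illustration only.
pairs = 5_000_000          # k/n pairs in a hypothetical sieve file
bytes_per_pair = 16        # assumed: one 64-bit k plus one 64-bit n
mem_mib = pairs * bytes_per_pair / 2**20
print(round(mem_mib))      # rough floor, before any per-sequence overhead
```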

3. I have generally found that the optimum sieve depth for sieving n=25K-100K approaches P=~100G using sr2sieve with the -x option if you don't break off any n-ranges, regardless of the number of k's. (Can be very machine dependent though.) This optimum doesn't change much as you increase the # of k's past a certain point because there is almost no efficiency gain in continuing to increase the number of k's. I believe the point of no efficiency gains is almost definitely fewer than 1000 k's and likely as low as 200-300 k's.
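The usual stopping rule behind an "optimal depth" figure like this is: sieve until removing one more candidate costs as much sieve time as one primality test. A sketch of that calculation, with invented throughput numbers (the point is the structure; real break-even depths move by orders of magnitude with hardware and file size, which is why it is so machine dependent):

```python
import math

# Sketch of the "sieve until a factor costs more than a test" rule.
# All throughput figures here are invented for illustration.
remaining = 5_000_000          # k/n pairs left in the file (assumed)
sieve_speed = 50.0e9 / 86400   # sieve range covered per second: 50G/day (assumed)
llr_seconds = 400.0            # average time of one primality test (assumed)

def seconds_per_factor(p):
    # Heuristic: a unit of sieve range near depth p removes about
    # remaining/(p*ln p) candidates, so one factor costs the inverse
    # of that, divided by the sieve's range-per-second throughput.
    factors_per_unit_p = remaining / (p * math.log(p))
    return 1.0 / (factors_per_unit_p * sieve_speed)

# Walk up in depth until one factor costs as much as one test.
p = 1.0e9
while seconds_per_factor(p) < llr_seconds:
    p *= 1.1
print(f"break-even near P = {p:.2g}")
```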

VBCurtis 2014-03-13 23:10

[QUOTE=gd_barnes;368811]
3. I have generally found that the optimum sieve depth for sieving n=25K-100K approaches P=~100G using sr2sieve with the -x option if you don't break off any n-ranges, regardless of the number of k's. (Can be very machine dependent though.) This optimum doesn't change much as you increase the # of k's past a certain point because there is almost no efficiency gain in continuing to increase the number of k's. I believe the point of no efficiency gains is almost definitely fewer than 1000 k's and likely as low as 200-300 k's.[/QUOTE]

This deserves some experimentation! If you're right, and groups of 300 k's are roughly as efficient as 3000 k's, we have no reason to run enormous sieves. I could take, say, 20M to 250M to optimal depth rather quickly to get the forum something to prime-search. I think I'll try 50, 100, 150, 200, 300, 400, etc k's until the sieve time per G rises linearly with number of k's (which would mean no efficiency gain).

I don't think it's worth it from an efficiency perspective to sieve ranges where fewer than half the tests will be conducted. If 63% of the k's would be removed by n=400k, the n=399k tests would be run only 37% of the time. Also, sieving n=100k to 300k followed by 300k to 1M is a nicer balance of ranges than 100k-400k and 400k-1M. I believe about half the sequences would produce a prime in each of 100k-300k and 300k-1M, though that's a very rough estimate.
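The balance claim can be checked with the standard heuristic that the expected number of primes for a sequence over n in [a, b] grows like log(b/a) (the constant factors cancel when comparing splits):

```python
import math

# Compare the two proposed splits using the log-ratio heuristic:
# expected primes per sequence over n in [a, b] ~ C * ln(b/a).
split_a = (math.log(300 / 100), math.log(1000 / 300))  # 100k-300k vs 300k-1M
split_b = (math.log(400 / 100), math.log(1000 / 400))  # 100k-400k vs 400k-1M

print(round(split_a[0] / split_a[1], 2))  # ~0.91: nearly even halves
print(round(split_b[0] / split_b[1], 2))  # ~1.51: clearly lopsided
```

So 100k-300k / 300k-1M is indeed the more even split, consistent with "about half the sequences" producing a prime in each piece.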

