
mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Conjectures 'R Us (https://www.mersenneforum.org/forumdisplay.php?f=81)
-   -   Software/instructions/questions (https://www.mersenneforum.org/showthread.php?t=9742)

Puzzle-Peter 2011-03-18 15:12

Sieving question
 
I started sieving for S63 using srsieve. I specified -f but for some reason did not get factor files, so now I have different srsieve.out files each with a number of candidates removed.

I can extract the removed candidates for each output file and then remove them all from the original sieve file using an awk script several times (once for each core), but it's a bit awkward. Can srfile or some other tool be used for this?

Thanks
Peter

rogue 2011-03-18 17:00

[QUOTE=Puzzle-Peter;255706]I started sieving for S63 using srsieve. I specified -f but for some reason did not get factor files, so now I have different srsieve.out files each with a number of candidates removed.

I can extract the removed candidates for each output file and then remove them all from the original sieve file using an awk script several times (once for each core), but it's a bit awkward. Can srfile or some other tool be used for this?

Thanks
Peter[/QUOTE]

srsieve will remove the factors automatically. You don't need to worry about saving off factors. Use srsieve -a to create the .abcd file needed for sr2sieve. That will create a factor file that srfile can use to remove factors. When using sr2sieve, you probably want to specify the -x option so that it doesn't generate Legendre symbols.

henryzz 2011-03-18 17:08

[QUOTE=rogue;255778]srsieve will remove the factors automatically. You don't need to worry about saving off factors. Use srsieve -a to create the .abcd file needed for sr2sieve. That will create a factor file that srfile can use to remove factors. When using sr2sieve, you probably want to specify the -x option so that it doesn't generate Legendre symbols.[/QUOTE]
I think he sieved multiple ranges of p with srsieve and wants to combine the sieve files.

Puzzle-Peter 2011-03-18 17:48

[QUOTE=henryzz;255784]I think he sieved multiple ranges of p with srsieve and wants to combine the sieve files.[/QUOTE]

Exactly. I had 8 cores working on different ranges of p (0-5e8, 5e8-10e8 etc.) so now I have 8 srsieve.out files and each of them has some candidates missing due to factors from its respective p range.
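A candidate survives the combined sieve only if it survives in every one of the per-range files, so merging them is just a set intersection of the candidate lines. Below is a minimal Python sketch of that idea. It assumes each file carries one self-contained candidate record per line after a single header line (e.g. after converting each output with "srfile -w" to PFGW format); the file names are placeholders, so adapt the header handling if your format differs.

[CODE]
# combine_ranges.py - keep only candidates that survived every p-range.
# Assumption: the first line of each file is a header and every other
# non-empty line is one self-contained candidate record (convert with
# "srfile -w" first if your srsieve.out files use a more compact format).
import sys

def load(path):
    with open(path) as f:
        lines = [l.rstrip('\n') for l in f if l.strip()]
    return lines[0], set(lines[1:])        # (header, candidate records)

if len(sys.argv) < 3:
    sys.exit('usage: combine_ranges.py range1.out range2.out ...')

header, surviving = load(sys.argv[1])
for path in sys.argv[2:]:
    surviving &= load(path)[1]

with open('combined.out', 'w') as out:     # output order is lexicographic;
    out.write(header + '\n')               # re-sort by n later if a tool cares
    out.write('\n'.join(sorted(surviving)) + '\n')

print(len(surviving), 'candidates survive all', len(sys.argv) - 1, 'ranges')
[/CODE]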

rogue 2011-03-18 18:42

[QUOTE=Puzzle-Peter;255810]Exactly. I had 8 cores working on different ranges of p (0-5e8, 5e8-10e8 etc.) so now I have 8 srsieve.out files and each of them has some candidates missing due to factors from its respective p range.[/QUOTE]

I see. I wouldn't have done it that way. I would have sieved the same range of p but for different ranges of k, to about 1e8 (some value above the max k), then switched to sr2sieve. sr2sieve will work the way you want srsieve to work.

Puzzle-Peter 2011-03-18 19:10

[QUOTE=rogue;255849]I see. I wouldn't have done it that way. I would have sieved the same range of p but for different ranges of k, to about 1e8 (some value above the max k), then switched to sr2sieve. sr2sieve will work the way you want srsieve to work.[/QUOTE]

It works rather nicely that way and thanks to KEP I now know how to handle this. I'll be off for the next few days, but I'll let you know about my success (knock on wood) here when I'm back.

Thanks for all the advice you people are offering!

Peter

henryzz 2011-04-21 17:32

The link in post 4 to llr is for 3.7.1c

grobie 2012-06-21 17:55

I am new to pfgw. I have read and re-read the instruction page but can't get pfgw to test my PRPs. Can someone tell me the command line to test my PRP file? I have used PFGW pfgw.txt -f0 -tp

firejuggler 2012-06-21 18:05

you just have to put the filename at the end.
PFGW -f0 -tp pfgw.txt

grobie 2012-06-21 18:12

[QUOTE=firejuggler;302880]you just have to put the filename at the end.
PFGW -f0 -tp pfgw.txt[/QUOTE]

Same thing: as soon as I hit start, I get a message box that says "This range of work has been completed. Please close this message box and start a new range." But that message pops up the moment I hit the start button. Got to go to work now; will read back at midnight EST.

Puzzle-Peter 2012-06-21 18:43

That sounds like PFGW thinks it processed all the work in your input file. Could you post the first handful of lines from your input file? Maybe a corrupt header or an extremely short test that does not trigger any output? Have you tried -l to get a logfile even when there is no PRP?

rogue 2012-06-21 23:19

If there is a pfgw.ini file, delete it. You can add -Cverbose for verbose logging to the console.

grobie 2012-06-22 05:02

Still no luck. All that is in the pfgw.log is the 1 PRP, but for the life of me I can't get it to test it.

kar_bon 2012-06-22 07:50

[QUOTE=grobie;302952]Still no luck. All that is in the pfgw.log is the 1 PRP, but for the life of me I can't get it to test it.[/QUOTE]

The *.log file contains the PRPs found and the *-prime.log file the primes found; don't mix them up.

Example:
the testfile "test.txt" contains two entries:
[code]3999*130^72-1
100542585*2^35-1[/code]

Calling "pfgw -l -f test.txt" will produce these:
Screen output:
[code]
Output logging to file pfgw.out
No factoring at all, not even trivial division
Switching to Exponentiating using GMP
3999*130^72-1 is 3-PRP! (0.0046s+0.0008s)
100542585*2^35-1 is 3-PRP! (0.0000s+0.0033s)
[/code]

Because only PRPs were found, there is a "pfgw.log", which contains:
[code]
3999*130^72-1
100542585*2^35-1
[/code]

"pfgw.out" contains all processed candidates.

The "pfgw.ini" contains a line with
[code]
CurLineNum=3
[/code]
which says that everything in the file has been processed.

Calling "pfgw -l -tp -f test.txt" will produce these:
Screen output:
[code]
Output logging to file pfgw.out
No factoring at all, not even trivial division
Primality testing 3999*130^72-1 [N+1, Brillhart-Lehmer-Selfridge]
Running N+1 test using discriminant 7, base 1+sqrt(7)
Calling Brillhart-Lehmer-Selfridge with factored part 51.45%
3999*130^72-1 is prime! (0.0257s+0.0031s)
Primality testing 100542585*2^35-1 [N+1, Brillhart-Lehmer-Selfridge]
Running N+1 test using discriminant 7, base 1+sqrt(7)
Calling Brillhart-Lehmer-Selfridge with factored part 57.38%
100542585*2^35-1 is prime! (0.0058s+0.0046s)
[/code]

Now 2 primes were found and "pfgw-prime.log" contains:
[code]
3999*130^72-1
100542585*2^35-1
[/code]

There's no "pfgw.log" because no PRPs were found!

Before running pfgw.exe, delete all of these files or move them to another folder.

grobie 2012-06-22 12:19

Thanks everyone. What I did was delete my .ini file, then copy and paste the pfgw.log with my PRPs into a new file test.txt, and then it worked.

CGKIII 2012-08-28 01:18

Trouble with {number_primes} option in PFGW
 
After lurking a bit and getting tired of BOINC doing its own thing, I've come to CRUS to do some cool stuff.

Once I get all the software and such figured out, I'd like to wind up reserving one of the recommended bases (S391 to 25k). However, I've got a couple questions.

First, I can't seem to get the {number_primes,$a,1} option working correctly.

I'm using the following input file (modified to test just this functionality)

[code]
ABCD 1456*391^$a+1 [2529] // {number_primes,$a,1}
1539
8
3
7
20
7
30
2
1
14
6
7
6
9
[/code]

And then I throw this into WinPFGW:
pfgw -f0 -l -t sieve-sierp-base391-2.5K-25Kcgk2.txt

My understanding of the {number_primes} functionality is that it should stop after the third test (where it finds that 1456*391^4076+1 is a prime), but it doesn't.

With the standard screen logging, it says that "ABCD File Processing for at most 1 Primes."

If someone could explain where my error is here, that would be fantastic.


Secondly, how do I leverage multiple cores? Is it as simple as splitting up the range and opening multiple instances of WinPFGW?

Thanks a lot,
Caz

Mathew 2012-08-28 03:10

Welcome CGKIII,

I am not sure if pfgw likes the .abcd format (this is the one you are currently using). What you can do instead of using the .abcd format is use the pfgw format by doing the following command:

[CODE]srfile -w sieve-sierp-base391-2.5K-25Kcgk2.txt[/CODE]This will create a file called sr_391.pfgw (you can rename it to what you like). Edit the first line of this file to have
[CODE]//{number_primes,$a,1}[/CODE] same as you did with the .abcd

This will then stop on:
[CODE]1456*391^4076+1 is 3-PRP![/CODE][QUOTE=CGKIII;309469]
Secondly, how do I leverage multiple cores? Is it as simple as splitting up the range and opening multiple instances of WinPFGW?[/QUOTE]Yes, this is one method and the one I would recommend for understanding the software. Another method is using [URL="http://www.mersenneforum.org/showthread.php?t=16424"]PRPnet[/URL], but it may not be the most efficient method for the S391 2.5K-25K reservation.

Also,
You do not want to do
[CODE]pfgw -f0 -l -t pfgw_filename[/CODE]You would want to do
[CODE]pfgw -f0 -l pfgw_filename[/CODE] without the -t. You only want to do the -t option on the numbers found in the pfgw.log file.

Hope this helps,

Mathew

CGKIII 2012-08-28 04:15

Got it. Thanks a lot Mathew. Currently up and running on 2 cores with a manual split. Later this week I'll look more into PRPNet, since I've got another machine or two and would like to get things a bit more automated.

MyDogBuster 2012-08-28 04:26

Welcome CGKIII. There are many ways to do this testing. I personally use PRPNET, but it is tricky to set up unless you already know how to install MySQL.

An alternative is that we do have private PRPNET ports set up for the server side of PRPNET. You would only have to get the client side running pointing to that port. You could then have multiple cores going against your range.

I have done ranges for 2.5K-25K using this method and it works just fine as long as you configure the clients for at least a cache of 10. Any lower and the server will get bogged down because the tests @ 2.5 k are only about 1 sec in duration.

I am not a PFGW or WinPFGW guru but I'm sure there are enough people around that can get you rolling if that's what you choose. I do know that if you bust it into multiple cores, all tests for a k must stay together; otherwise, if you find a prime on one core, the other cores won't know about it. //{number_primes,$a,1} tells PFGW to flag the primed k and not test that k anymore. The program doesn't actually stop.

Again, welcome and ask all the questions you need to. :hello:

gd_barnes 2012-08-28 04:38

Hi CGK,

I'm sorry I didn't respond to your PM. I was out of town until late yesterday with limited online time.

It looks like Ian (MyDogBuster) and Mathew got you started. I wanted to add one more thing: If you decide to reserve S391, there is a sieve file already available. The file will probably need to be sieved further but it would be a very good start. Take a look at our Sierp reservations page at [URL]http://www.noprimeleftbehind.net/crus/Sierp-conjecture-reserves.htm[/URL]. Go down to base 391 and there will be a link to the file out to the right. To make your reservation "official", just post it in the bases 251-500 thread.

Good luck! :smile:


Gary

MyDogBuster 2012-08-28 04:51

I also need to point out that PRPNET will run on Linux or Windows. I know nothing of Linux but could probably get you started on Windows. Another poster here, Rogue, actually wrote PRPNET so we have the ultimate source of knowledge.

CGKIII 2012-09-15 21:58

Is sieving machine-dependent?

So I've run the new base script on S282 to n = 2500 (going to take it to n = 25,000). Now, it's my understanding that I run srsieve for the remaining k's up to - P 100e6 (magic number I saw somewhere and wrote down). And then I run sr2sieve until I get to a removal rate ~ time it takes to do an LLR test at about 60/70% of the range, for an "average" k.

I've got four machines with four different average testing times. If I sieve on one machine and stop at the average testing time for that one, will that roughly correspond to the same sieve depth, had I used another machine? I assume so, because we have sieve files that get passed back and forth between people, but if that's not the case, then I'd like to figure out how to choose which machine will do the sieving.

gd_barnes 2012-09-15 23:31

[QUOTE=CGKIII;311764]Is sieving machine-dependent?

So I've run the new base script on S282 to n = 2500 (going to take it to n = 25,000). Now, it's my understanding that I run srsieve for the remaining k's up to - P 100e6 (magic number I saw somewhere and wrote down). And then I run sr2sieve until I get to a removal rate ~ time it takes to do an LLR test at about 60/70% of the range, for an "average" k.

I've got four machines with four different average testing times. If I sieve on one machine and stop at the average testing time for that one, will that roughly correspond to the same sieve depth, had I used another machine? I assume so, because we have sieve files that get passed back and forth between people, but if that's not the case, then I'd like to figure out how to choose which machine will do the sieving.[/QUOTE]

Not necessarily. Others can elaborate more, but I can say that some machines are better for sieving and others for primality testing. Use whichever machine is the fastest siever for all sieving. To be more specific, you should probably calculate the optimum sieve depth while sieving on your fastest sieving machine, with test times gleaned from your average primality-testing machine. You only want to sieve on your fastest sieving machine because sieving is only about 5-10% of the total effort. I personally have only one machine that I sieve on most of the time and it can easily keep the other 11 machines busy with primality-testing work. Doing it that way minimizes the overall CPU effort needed.
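The rule of thumb described above (sieve until the removal rate matches the test time at roughly 60-70% of the n-range) can be turned into a quick check. A minimal sketch, assuming the only inputs are one measured test time at a known n on the testing machine and that test time grows roughly with the square of n (a rough approximation, not a project-prescribed rule); all the numbers in the example are made up.

[CODE]
# sieve_depth.py - rough "keep sieving?" check from the rule of thumb above.
# Assumptions: primality-test time scales ~ n^2 (rough approximation), and the
# reference timing comes from the machine that will do the primality testing.

def test_time_at(n, ref_n, ref_seconds):
    """Estimate test time at exponent n from one measured data point."""
    return ref_seconds * (n / ref_n) ** 2

def keep_sieving(sec_per_factor, n_min, n_max, ref_n, ref_seconds, frac=0.7):
    """True while removing a candidate by sieving is cheaper than testing
    a candidate at frac of the way through the n-range."""
    n_ref_point = n_min + frac * (n_max - n_min)
    return sec_per_factor < test_time_at(n_ref_point, ref_n, ref_seconds)

# Example: a test at n=10000 took 40 s; sieving n=2500-25000 currently
# removes a factor every 95 s.  (All numbers are made up.)
print(keep_sieving(95, 2500, 25000, ref_n=10000, ref_seconds=40))
[/CODE]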

CGKIII 2012-09-21 09:03

I'd like to modify the new base script to handle not-quite-new bases, since it seems to do a bunch of nice things already.

Is it as simple as adding "DIM min_n, xxxx" to the section where all the variables are declared, and then change "SET n, 0" to "SET n, min_n"

Testing it myself might be the way to go, but I'll be out of town this weekend and won't have time to troubleshoot.

I want to get more granular with my sieving (rather than optimizing sr2sieve removal rates for a large range, chunk it better and optimize within those smaller chunks), but I don't currently have a good way to use PFGW to test from A to B, where A != 0.

henryzz 2012-09-21 11:15

[QUOTE=CGKIII;312286]I'd like to modify the new base script to handle not-quite-new bases, since it seems to do a bunch of nice things already.

Is it as simple as adding "DIM min_n, xxxx" to the section where all the variables are declared, and then change "SET n, 0" to "SET n, min_n"

Testing it myself might be the way to go, but I'll be out of town this weekend and won't have time to troubleshoot.

I want to get more granular with my sieving (rather than optimizing sr2sieve removal rates for a large range, chunk it better and optimize within those smaller chunks), but I don't currently have a good way to use PFGW to test from A to B, where A != 0.[/QUOTE]

It wouldn't know which k's primes had already been found for. It would be nice to write a script that would read in which k's are remaining, test them for a range of n (possibly just 1 n) and write the remaining k's to file. We would need a way of generating the starting k's. PFGW unfortunately seems reasonably slow at doing this for large CKs. I am not sure whether this is because it is a large job or because it is a scripting language. I suspect the latter.

rogue 2012-09-21 12:47

What are you trying to accomplish? IMO, skipping sieving will cost you a significant amount of time.

gd_barnes 2012-09-21 17:53

[QUOTE=CGKIII;312286]I'd like to modify the new base script to handle not-quite-new bases, since it seems to do a bunch of nice things already.

Is it as simple as adding "DIM min_n, xxxx" to the section where all the variables are declared, and then change "SET n, 0" to "SET n, min_n"

Testing it myself might be the way to go, but I'll be out of town this weekend and won't have time to troubleshoot.

I want to get more granular with my sieving (rather than optimizing sr2sieve removal rates for a large range, chunk it better and optimize within those smaller chunks), but I don't currently have a good way to use PFGW to test from A to B, where A != 0.[/QUOTE]

[QUOTE=henryzz;312291]It wouldn't know which k's primes had already been found for.[/QUOTE]

[QUOTE=rogue;312294]IMO, skipping sieving will cost you a significant amount of time.[/QUOTE]


I have to agree with David (henryzz) and Mark here. I do not suggest running the script for anything other than n=1 to 2500 or whatever depth you determine is best to start a new base. 2 reasons:

1. The script is not designed to handle specific k's remaining. It would need significant redesign to accomplish that.

2. Sieving the k's remaining at n=2500 and testing the resultant sieve file with LLR/PFGW/PRPnet with the stop-on-prime option set on is much more efficient.

CGKIII 2012-09-21 18:24

I don't want to skip sieving, I want to chunk it.

Run the new base script to n = 1500. Sieve with srsieve to n = 3000. Sieve with sr2sieve to optimal depth for n = 3000. Test the resultant candidates.

For those that remain, sieve with srsieve to n = 6250. Sieve with sr2sieve to optimal depth for n = 6250. Test the resultant candidates.

For those that remain, sieve to n = 12500...For those that remain, sieve to n = 25000.

I think by writing it out that way, I figured out what I missed (didn't sleep last night). I can just run srsieve on the remaining candidates like normal (haven't yet had to do this). Take the old set of remaining k's (the pl_remain output from the script), remove all that were primed, and then use that as the input to the next round with srsieve.

Depending on the ck and total time, I might not use that many chunks, but even two seems better than my current method, where one of my machines is running sr2sieve from n = 1000 to n = 25000. Sorry for the confusion.

Puzzle-Peter 2012-09-21 19:27

As far as I know (and have experienced) the most effective way is to sieve as many k's and as large an n-range as you are going to test (except for really huge numbers of k's).

So I (and probably everybody else) would recommend using one big sieve file for n=1000 to 25000. If you have many k's, sieve to the optimal depth for, say, n=5000, test to n=5000, remove all primed k's from the sieve file (quick and easy with srfile -d), continue sieving to the optimal depth for n=10000, test to n=10000, remove k's and so on...

EDIT: I just realized that's probably what you meant. But I'm not sure if you were planning to start new sieve files each time, i.e. have only n=1000 to 5000 in the first sieve file, n=5000 to 10000 in the second and so on. It's more efficient to start with the whole range and take out k's that are primed.

rogue 2012-09-21 19:38

I understand now.

This is what you should consider doing.

1) Take all k and sieve from n to N where N = max N you intend to test as part of your overall reservation. Sieve to 1e6 or some value of p > max N.
2) Use sr2sieve to sieve to the optimal removal rate for k*b^m+/-1 where n < m < N.
3) Run "srfile -k factors.txt -w sr_xyz.pfgw" to remove k/n that have a factor.
4) When sieving is done, run pfgw with number_primes (or llr with its corresponding option) for the range of n to m.
5) Run "srfile -d pfgw.log" or "srfile -d pfgw-prime.log -w sr_xyx.pfgw" (again, assuming use of pfgw). This will eliminate sequences from your sieve file for which you found a prime in step 4.
6) Set n = m.
7) Repeat from step 2.

This will reduce the number of steps you need to do to complete the range. Will you sieve some k to a far higher n than you need? Yes, but increasing the size of the range of n isn't that costly.
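For step 5, "srfile -d" is the normal tool; if it is not handy, the same elimination can be scripted. A minimal hand-rolled sketch, assuming a PFGW-format sieve file whose data lines begin with the k value (either "k n" pairs or full k*b^n+c expressions like the log lines shown earlier in this thread) and using placeholder file names.

[CODE]
# drop_primed_k.py - remove every sequence whose k already produced a prime.
# Normally "srfile -d pfgw-prime.log -w <sievefile>" does this; this is a
# hand-rolled fallback.  Assumption: in both files the k value is the leading
# run of digits on each data line ("1456 2529" or "1456*391^2529+1").
import re

LEADING_DIGITS = re.compile(r'^(\d+)')

def leading_k(line):
    m = LEADING_DIGITS.match(line.strip())
    return m.group(1) if m else None

with open('pfgw-prime.log') as f:
    primed = {leading_k(l) for l in f if leading_k(l)}

with open('sr_xyz.pfgw') as f:              # placeholder name from the steps above
    lines = f.readlines()

kept = [lines[0]]                           # keep the header line as-is
kept += [l for l in lines[1:] if leading_k(l) not in primed]

with open('sr_xyz_trimmed.pfgw', 'w') as out:
    out.writelines(kept)

print('removed', len(lines) - len(kept), 'lines for', len(primed), 'primed k')
[/CODE]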

gd_barnes 2012-09-21 20:05

Adding some details to what Peter and Mark said, here is what I do:

Run the script to n=2500, sieve all remaining k's for n=2500-25K to the optimal depth for the range of n=2500-10K, test to n=10K, remove the primed k's and the n=2500-10K range from the big sieve file, sieve n=10K-25K to optimal depth, and finally test n=10K-25K.

CPU-wise, you might be able to make a case for breaking off more pieces but IMHO it's too much hassle to do so.

CGKIII 2012-09-28 19:12

Got it. Thanks everyone.

When running sr2sieve, I'm seeing removal rates oscillate quite a bit. Over some time period (scrolling up a number of pages in my terminal window), I see it oscillating from 6 seconds per factor up to 14 seconds per factor, up and down again pretty often. Optimally, I would want to stop it when the removal rate is ~10 seconds, but it seems like I don't want to stop when it first hits the optimal rate, but some time after. Is there any guidance here?

rogue 2012-09-28 20:10

[QUOTE=CGKIII;313085]Got it. Thanks everyone.

When running sr2sieve, I'm seeing removal rates oscillate quite a bit. Over some time period (scrolling up a number of pages in my terminal window), I see it oscillating from 6 seconds per factor up to 14 seconds per factor, up and down again pretty often. Optimally, I would want to stop it when the removal rate is ~10 seconds, but it seems like I don't want to stop when it first hits the optimal rate, but some time after. Is there any guidance here?[/QUOTE]

The difficulty is that the removal rate is based upon the previous 30 factors found. If you have a compiler (MinGW or gcc), then you can change that to a higher value and continue sieving. It will flatten out the removal rate. If you are running things that are stealing cycles, that would also impact the rate. The software runs at a low/idle priority so the removal rate will be affected if other programs steal cycles from it.
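To see why the displayed number jumps around, you can simulate it: even if factors arrive at a steady average rate, the gaps between them are roughly exponentially distributed, so an average over only the last 30 gaps is noisy, while a wider window flattens out (the effect of the source change described above). A small self-contained toy simulation; the 10-second "true" rate is made up.

[CODE]
# removal_rate_noise.py - why a "sec/factor" figure over only 30 factors jumps around.
# Toy model: factor gaps drawn from an exponential distribution with a fixed
# true mean of 10 s (made-up number), then averaged over two window sizes.
import random

random.seed(1)
TRUE_MEAN = 10.0
gaps = [random.expovariate(1 / TRUE_MEAN) for _ in range(3000)]

def rolling_averages(data, width):
    return [sum(data[i - width:i]) / width for i in range(width, len(data) + 1)]

for width in (30, 300):
    avgs = rolling_averages(gaps, width)
    print(f'window={width:4d}  min={min(avgs):5.1f}s  max={max(avgs):5.1f}s')
[/CODE]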

VBCurtis 2012-12-29 18:08

[QUOTE=gd_barnes;219829]I'll clarify on that:

What I was quoting was more like a probability of prime closer to 70%. Honestly I don't know what percentage chance of prime is the best to sieve.
Gary[/QUOTE]

tl;dr version: Pick a sieve range that gives 60-65% chance of prime.

Assume sieve time scales with the square root of n-range; that is, sieving 100k to 500k would take sqrt(2) times as long as sieving 100k to 300k, while sieving 300k to 500k would take the same time as sieving 100k to 300k.

Let's say there is a 60% chance of prime in 100k-300k.

Compare sieving 100-500k at once, versus 100-300k followed by 300-500k:
60% of the time, we find a prime in 100-300, and the extra effort of 300-500 is wasted. This extra effort was 40% of the time taken to sieve 100-300.
40% of the time, we don't find a prime in 100-300, and having a file to 500k saves 60% of the time taken to sieve 100-300 versus starting a new sieve 300-500.

So, if we consider sieving to the point where a file has a 60% chance of prime, we are indifferent between sieving there or sieving twice as large an n-range.

This would seem to suggest the optimal sieve range is somewhere between those two points; that is, at a point higher than a 60% chance of prime. However, it also illustrates that it hardly matters: we spend so little time sieving versus testing, and the efficiency curve is VERY broad around the optimal decisions both for n-range and p-depth.

My intuition on this is that a file that produces 1 expected prime (that is, a 63% chance of prime) is optimal. I think my logic shows 60% is on the low side of optimal, but I lack the reasoning presently to demonstrate 63% is optimal.
-Curtis
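The 63% figure is just 1 - 1/e: if a sieve file is expected to contain mu primes, and the candidates behave roughly independently with small individual probabilities, the chance of at least one prime is about 1 - exp(-mu), so mu = 1 gives ~63.2% and ~60% corresponds to mu around 0.92. A tiny illustration (the mu values are examples, not taken from a real file):

[CODE]
# prime_chance.py - chance of at least one prime, given an expected count.
# If a sieve file is expected to yield mu primes (candidates roughly independent,
# each with a small individual probability), then P(at least one) ~ 1 - exp(-mu).
import math

def chance_of_at_least_one(mu):
    return 1 - math.exp(-mu)

for mu in (0.5, 0.92, 1.0, 2.0):
    print(f'expected primes {mu:4.2f} -> {chance_of_at_least_one(mu):.1%} chance of a prime')
[/CODE]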

KEP 2013-08-11 19:38

Simple question:

Is srsieve version 1.0.5 working correctly with regard to removing algebraic factors?

I'm asking because I'm currently in the process of sieving 4 k's to p=1P, but I'll have to start from scratch if too many n's have been removed. So can someone elaborate and tell me whether or not this version of srsieve is working correctly?

Regards

KEP

rogue 2013-08-11 20:01

[QUOTE=KEP;349172]Is srsieve version 1.0.5 working correctly with regard to removing algebraic factors?

I'm asking because I'm currently in the process of sieving 4 k's to p=1P, but I'll have to start from scratch if too many n's have been removed. So can someone elaborate and tell me whether or not this version of srsieve is working correctly?[/QUOTE]

Nobody has reported any issues. It would be easy enough to modify the code to print all k/b/n removed due to algebraic factorizations so that you could verify.

KEP 2013-08-12 11:39

[QUOTE=rogue;349177]Nobody has reported any issues. It would be easy enough to modify the code to print all k/b/n removed due to algebraic factorizations so that you could verify.[/QUOTE]

Well, I'm good as it is now; it tells me the number of n's removed by algebraic factorization. In addition, Mathew Steine concurred that the number of candidates seemed to be OK, since the number of candidates remaining was about 700 lower after resieving with the newer version of srsieve. So now two people have told me that to the best of their knowledge there are no apparent issues with the software, and that is enough to reassure me that everything is all right :)

Thanks for your reply.

KEP

TheCount 2013-10-01 05:14

Split the Sieve file
 
I've started testing 20*620^n-1 from 100k to n=200k. I am currently at about 2.7% of the range using one CPU core and PFGW. At this rate it will take 200+ days to finish! I have a 6-core CPU. Reading this thread, I can either set up a PRPnet server on my PC, use a private port on the NPLB server, or manually split the sieve file into 6 parts and feed each core. For just starting out I want to try the latter. Just splitting the sieve file into 6 equal parts by increasing n is dumb, because if the prime is 10% through the range I won't know until each core has done 10% of the work (ignoring increasing WU size). Plus the cores doing the smaller n will finish first, etc. A better way is to take every 6th line of the sieve file and put it into a separate file, one file for each core. Does someone have a Windows DOS batch file that can do this?

I hope reporting 6 results files crunched this way won't be a problem?

LaurV 2013-10-01 05:49

If you used NewPgen to make the file, it can automatically split it into 6, to be checked for primality on 6 different computers (or the same computer, 6 cores). It can automatically generate six pfgw-type files. I just started sieving "k*b^n-1 with k fixed", from n=100000 to 200000, and after 5 minutes of sieving there are about 5800 candidates remaining. Most probably the number is much lower if you sieve higher. So, I still don't get why it takes so long to test them for primality on your machine. How long does pfgw take for one test?

TheCount 2013-10-01 07:26

I have ~3800 candidates left after sieving to P=5T. At 60% of the range it takes 80 minutes per test on my AMD X6 1100T clocked at 3.5GHz.
80 mins x 3800 candidates = 211 days. This is for a 1k'er.
NewPgen isn't in the CRUS Pack. I got the sieve file off the CRUS web page: [URL]http://www.noprimeleftbehind.net/crus/Riesel-conjecture-reserves.htm[/URL]
Sure, I can split off 6 different n ranges and crunch them separately on each core, but it's not very efficient, as I argued in my post.
Where can I find NewPgen?

LaurV 2013-10-01 07:51

Then you have a well-sieved file, won't need another, and won't need NewPgen ([URL="http://lmgtfy.com/?q=newpgen"]by the way[/URL]).

What you need is a card-dealer, [URL="http://www.silisoftware.com/tools/split.php"]like this[/URL]. Paste your text inside, select 6 stacks, there you go, paste them back in 6 files. If your file has some header, don't forget to put the header in all files.

(for a small test, type numbers from 1 to 10, each on a line, select 3 stacks, deal, etc)

(edit: related to NewPgen, which I forgot was still running since the time of my first post in this thread (what an idiot!): it reached 5G2, it still has 4930 candidates, and it is still eliminating one candidate every 34 seconds. I have no idea how tough it gets further on; I will stop it. Generally the time increases exponentially as the candidate pool is reduced, but if your computer has fast memory, you may get to eliminate one candidate faster than the 80 minutes if you continue sieving from the file you have. This would be worth a test, in my opinion.)

gd_barnes 2013-10-01 08:07

LaurV, please do not recommend to a new searcher on this project to sieve with NewPGen. It's the slowest possible way to sieve on this project. We use srsieve/sr2sieve -or- for 1k, sr1sieve...far faster. Thank you.

The card-dealer link is very good. I always used Excel to split it up in such a manner. TheCount, I would recommend using LaurV's link to split your file up into 6 separate parts where each one of them will take about the same amount of time to test and little CPU time will be wasted if you find a prime. That way, you can be done in ~35 days if you have your computer running 24x7 and don't find a prime.

One thing that surprises most new people here is the amount of time that it takes to test ranges. With the project being nearly 6 years old, all of the "low lying fruit" has already been tested.

LaurV 2013-10-01 08:26

Whoops.... :blush:
(I hope I will get some leniency for the fact that when I replied, I didn't know where the file came from.)

The link to the splitter is not to my credit either; I have it from [URL="http://www.mersenne.ca/"]James' site[/URL] (somewhere on the right, under "work balancer" or so). I use a small perl script (one-liner) to do this, but I didn't want to bother the OP about perl.

@OP: please forget what I said about NewPgen, and sorry for steering you in the wrong direction. For me NewPgen is still the fastest way to sieve :razz:

gd_barnes 2013-10-01 08:32

[QUOTE=TheCount;354731]
I hope reporting 6 results files crunched this way won't be a problem?[/QUOTE]

No problem. One thing that I eventually suggest is to take the 6 files and sort them back by n-value by parsing out the n before sending one big sorted file to me.

rogue 2013-10-01 10:58

[QUOTE=TheCount;354731]I've started testing 20*620^n-1 from 100k to n=200k. I am currently at about 2.7% of the range using one CPU core and PFGW. At this rate it will take 200+ days to finish! I have a 6-core CPU. Reading this thread, I can either set up a PRPnet server on my PC, use a private port on the NPLB server, or manually split the sieve file into 6 parts and feed each core. For just starting out I want to try the latter. Just splitting the sieve file into 6 equal parts by increasing n is dumb, because if the prime is 10% through the range I won't know until each core has done 10% of the work (ignoring increasing WU size). Plus the cores doing the smaller n will finish first, etc. A better way is to take every 6th line of the sieve file and put it into a separate file, one file for each core. Does someone have a Windows DOS batch file that can do this?[/QUOTE]

For you I would suggest using PRPNet, and for this reason: if one core finds a prime, the others will keep doing PRP tests for no reason. With PRPNet you can load up a second conjecture while working on the first one without having to touch clients.

kar_bon 2013-10-01 11:52

[QUOTE=TheCount;354731]A better way is to take every 6th line of the sieve file and put it into a separate file, one file for each core. Does someone have a Windows DOS batch file that can do this?[/QUOTE]

I've done this for a 500k range with ~3M candidates in all.

You can do this:
- say your file with all candidates named "all.txt" (in newpgen-format: first line header, other lines k-n-pairs)
- get "gawk.exe" (you can find it [url=http://www.rieselprime.de/Others/OESall.zip]here[/url])
- create a file called "do.awk" with following content:
[code]
BEGIN { getline line; i = 1 }
{
  if (head[i] == 0) {
    print line >> "all_" i ".txt"
    head[i] = 1
  }
  print $0 >> "all_" i ".txt"
  i++
  if (i == 7) i = 1
}
[/code]

- calling the command "gawk -f do.awk all.txt" will create 6 files (all_1.txt, all_2.txt, ...) with all candidates distributed (every 6th pair goes into the same file) and the header in the first line.

TheCount 2013-10-01 14:33

deal sieve file
 
Thanks for all the input. I will try PRPNet later down the track. I am only using 1 PC on CRUS to begin with.
I realised NewPGen is old, I won't use it.
Using cut and paste with the mouse is error prone. I don't want to do it.
I'll sort the result files into one big file before reporting back.
I created my own batch file and called it deal_file.bat:
[code]
@echo off
setlocal enabledelayedexpansion
if "%1"=="" (
    set /a cores=2
) else (
    set /a cores=%1
)
if "%2"=="" (
    set input_file=input.txt
) else (
    set input_file=%2
)
echo dealing lines of %input_file% into !cores! split_*.txt files

set /p firstline=<%input_file%
echo the file header: "%firstline%" will be the header of each split_*.txt file
set /a header=1
set /a initial_deal=1
set /a nbr=0
for /f "tokens=*" %%a in (%input_file%) do (
    if !header! EQU 1 (
        set /a header=0
    ) else (
        if !initial_deal! EQU 1 (
            if exist split_!nbr!.txt (
                del split_!nbr!.txt
            )
            echo %firstline% >>split_!nbr!.txt
        )
        echo %%a >>split_!nbr!.txt
        set /a nbr+=1
        if !nbr! GEQ !cores! (
            set /a nbr=0
            set /a initial_deal=0
        )
    )
)
[/code]I made it a bit idiot-proof, as a screw-up could have big consequences.
Now I can call it like this:
> deal_file.bat 6 sieve-riesel-base620-100K-200K.txt
Not as elegant as Perl or gawk, but it does the trick.

LaurV 2013-10-02 05:54

:goodposting: :tu: :applause:

That is the spirit!

gd_barnes 2013-10-02 07:13

Nice work Count! :smile:

kar_bon 2013-10-02 08:41

[QUOTE=TheCount;354773]Not as elegant as Perl or gawk, but it does the trick.[/QUOTE]

An old DOSler... it's been a long time since I've seen good old DOS programs like this, often a bit tricky in how they do the work, but functioning well.
I've done much work the same way (like the 'new' LLRnet, 3 years ago now... wow... time's running fast).
*hats off*

rebirther 2014-11-01 15:48

Some links and the CRUS pack need to be updated; some links are not working anymore and the apps are very old.

gd_barnes 2014-12-03 21:29

1 Attachment(s)
Can someone please help me convert the attached file into a format that is readable by Notepad or Wordpad and let me know how you did it? Alternatively what software can read this file and can I easily download such software?

I have 6000-plus of these teeny files for the recent BOINC effort on R784 and need to get them converted into a format that I can read. Reb is enlisting the help of someone to help him get them into one big file but I don't think that will resolve the issue of me being able to read it/them.

Batalov 2014-12-03 21:38

The little file "srbase784_wu_2_0" is actually a zipped file. "unzip -p srbase784_wu_2_0" will dump the single line that it contains as text.

If these were gzipped (not zipped), you could "cat * > all.gz" and then gunzip or zcat or whatever.
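The same consolidation can also be done in a few lines of Python with the standard zipfile module, which works on Windows too. A minimal sketch, assuming every file in the folder is one of these single-member zip archives; the folder and output names are placeholders.

[CODE]
# dump_boinc_results.py - concatenate many small zipped result files into one text file.
# Assumption: every file in RESULT_DIR is a zip archive containing text members
# (like the extensionless srbase784_wu_* files discussed above).
import os
import zipfile

RESULT_DIR = 'results'          # placeholder: folder holding the 6000+ files
OUTPUT = 'all_results.txt'

with open(OUTPUT, 'w') as out:
    for name in sorted(os.listdir(RESULT_DIR)):
        path = os.path.join(RESULT_DIR, name)
        if not zipfile.is_zipfile(path):
            continue            # skip anything that isn't actually a zip
        with zipfile.ZipFile(path) as zf:
            for member in zf.namelist():
                out.write(zf.read(member).decode('utf-8', 'replace'))
                out.write('\n')
[/CODE]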

gd_barnes 2014-12-03 21:52

[QUOTE=Batalov;389031]The little file "srbase784_wu_2_0" is actually a zipped file. "unzip -p srbase784_wu_2_0" will dump the single line that it contains as text.

If these were gzipped (not zipped), you could "cat * > all.gz" and then gunzip or zcat or whatever.[/QUOTE]

Thanks. I was showing my Linux/Windows ignorance. I wasn't aware that it was a Linux zipped file and was trying to open and unzip it on my Windows laptop. When Reb is able to get them all into one file for me, this won't be a big problem just dumping it into one big text file.

Batalov 2014-12-03 22:12

Zipping is cross-platform. The only thing that stumped you was the absence of an extension. Add the extension (manually) and you will open it in Windows just as well. (Or you can right-click and "Open with..." 7zFM; install 7-Zip from 7-zip.org.)

These files are just like those that live in anyone's (who is running BOINC) C:\ProgramData\BOINC\...slots\
For some reason they are very common in this framework, and "conveniently" stored without extension. I guess that this is both economy of letters and some trivial obfuscation. Add to that that most users won't even know that they have the C:\ProgramData\ folder (it is by default invisible) -- and you have some protection from people messing with results, e.g. to get some "cobblestones"; to get [I]more [/I]than their neighbor. ;-) Oh, vanity of vanities... :rolleyes:

gd_barnes 2014-12-03 22:19

Thanks Serge. Good to know. Yep the lack of extension fooled me...not hard to do.

yoyo 2014-12-04 19:08

file <filename> on the command line on Linux should also tell you which format it is, independent of its extension.

wombatman 2014-12-04 21:39

Before I start making reservations, I wanted to see if I understand the process correctly for proving a Riesel conjecture.

Using Base 1025, with the only remaining k of 8, I would use:

[CODE]srsieve -a -n 25e3 -N100e3 -P 1e6 -m 4e9 input.txt[/CODE] where input.txt has the line "8*1025^n-1"

Taking the file that srsieve put out, I would run PFGW (WinPFGW) and use "-f0 -tp -l" to try and prove the primes.

Is this all correct?

Batalov 2014-12-04 22:01

You run srsieve to a low limit like you wrote. Then you run sr1sieve to a much higher limit (the rule of thumb is to know how long the PFGW/LLR test will take in your future range and run sr1sieve until it removes candidates approximately as fast).

When you have a collection of[I] k[/I]'s for the particular conjecture, you do many srsieve steps (or just one - using pl_remain.txt like you used input.txt), then combine all files, and run sr2sieve on them altogether as above.

For [U]a few[/U] special[I] k[/I] values, you will want to remove [I]n[/I] values that possess an algebraic factorization.
This is an extra step -- best performed between running srsieve and sr{1|2}sieve.

E.g., here,[I] k[/I]=8, and for n=3m, 8*1025^n - 1 = (2*1025^m)^3 - 1 which has a factor 2*1025^m - 1.
So, you would want to remove all n divisible by 3. Except for this base, they are already removed -- [URL="http://factordb.com/index.php?query=8*1025%5En-1"]by small primes 3 and 7[/URL]. This is not true in general case.

For removing, you can use awk, perl or python, take your pick.
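The removal step for this k=8 example is a one-pass filter over the sieve file. A minimal Python sketch, assuming one candidate per data line after a header (either NewPGen-style "k n" pairs or full expressions like 8*1025^123-1) with placeholder file names; adapt the exponent extraction to your format.

[CODE]
# strip_algebraic.py - drop exponents with an algebraic factor (here: n divisible
# by 3 for 8*1025^n-1, since 8*1025^(3m)-1 = (2*1025^m)^3 - 1).
# Assumption: first line is a header; data lines are "k n" pairs or full
# expressions such as "8*1025^123-1".
import re

def exponent_of(line):
    m = re.search(r'\^(\d+)', line)          # full expression: take n after '^'
    if m:
        return int(m.group(1))
    return int(line.split()[-1])             # otherwise: n is the last field

with open('sieve_in.txt') as f:              # placeholder input name
    lines = f.readlines()

kept = [lines[0]] + [l for l in lines[1:] if l.strip() and exponent_of(l) % 3 != 0]

with open('sieve_out.txt', 'w') as out:      # placeholder output name
    out.writelines(kept)

print('removed', len(lines) - len(kept), 'candidates with n divisible by 3')
[/CODE]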

rogue 2014-12-04 22:51

[QUOTE=Batalov;389203]You run srsieve to a low limit like you wrote. Then you run sr1sieve to a much higher limit (the rule of thumb is to know how long the PFGW/LLR test will take in your future range and run sr1sieve until it removes candidates approximately as fast).

When you have a collection of[I] k[/I]'s for the particular conjecture, you do many srsieve steps (or just one - using pl_remain.txt like you used input.txt), then combine all files, and run sr2sieve on them altogether as above.

For [U]a few[/U] special[I] k[/I] values, you will want to remove [I]n[/I] values that possess an algebraic factorization.
This is an extra step -- best performed between running srsieve and sr{1|2}sieve.

E.g., here,[I] k[/I]=8, and for n=3m, 8*1025^n - 1 = (2*1025^m)^3 - 1 which has a factor 2*1025^m - 1.
So, you would want to remove all n divisible by 3. Except for this base, they are already removed -- [URL="http://factordb.com/index.php?query=8*1025%5En-1"]by small primes 3 and 7[/URL]. This is not true in general case.

For removing, you can use awk, perl or python, take your pick.[/QUOTE]

srsieve already removes algebraic factorizations. It shows them to you when you start sieving a new range.

wombatman 2014-12-04 23:39

[QUOTE=Batalov;389203]You run srsieve to a low limit like you wrote. Then you run sr1sieve to a much higher limit (the rule of thumb is to know how long the PFGW/LLR test will take in your future range and run sr1sieve until it removes candidates approximately as fast).

When you have a collection of[I] k[/I]'s for the particular conjecture, you do many srsieve steps (or just one - using pl_remain.txt like you used input.txt), then combine all files, and run sr2sieve on them altogether as above.

For [U]a few[/U] special[I] k[/I] values, you will want to remove [I]n[/I] values that possess an algebraic factorization.
This is an extra step -- best performed between running srsieve and sr{1|2}sieve.

E.g., here,[I] k[/I]=8, and for n=3m, 8*1025^n - 1 = (2*1025^m)^3 - 1 which has a factor 2*1025^m - 1.
So, you would want to remove all n divisible by 3. Except for this base, they are already removed -- [URL="http://factordb.com/index.php?query=8*1025%5En-1"]by small primes 3 and 7[/URL]. This is not true in general case.

For removing, you can use awk, perl or python, take your pick.[/QUOTE]

Thanks!

Batalov 2014-12-04 23:48

[QUOTE=rogue;389213]srsieve already removes algebraic factorizations. It shows them to you when you start sieving a new range.[/QUOTE]
Maybe it does. Maybe it didn't before: see [URL="http://www.noprimeleftbehind.net/crus/sieve-sierp-base86-250K-1M.txt"]this file for example[/URL] (590 values of n are divisible by 3, e.g. n = 253515)

It is best to always check, especially other people's files.
___________________

[U]Additional notes about Aurifeuillean factors[/U]: (4*c[SUP]4[/SUP]) * b[SUP]n[/SUP] +1 where 4|n.
1. Aurifeuillean factors for k=4, 64, 324, 1024 (most commonly encountered) are luckily eliminated by small prime 5, [U]except for[/U] b which are divisible by 5.
2. Case in point - Sierp S155. It has only one k left, k=4. If we sieve with srsieve, we will get some n==2 (mod 4) and some n==0 (mod 4) in the result file. The latter of course should be eliminated completely or much computation will be wasted. It is probably best if I reserve this base now, to run it properly.

Batalov 2014-12-05 01:22

Another interesting case. S100, k=64. Note that 100 is a square (therefore for even n, 100^n will be the 4th power). All even n's should be removed from the sieve file for k=64, [URL="http://www.noprimeleftbehind.net/crus/sieve-sierp-base100-250K-1M.txt"]but they aren't[/URL].

A note for the more curious reader: why are the corresponding values already relatively depleted? [SPOILER]That is because the candidates (k*b^n+c) are a product of two much smaller numbers, and smaller numbers have accordingly 1) more small factors than another random number of the original size, and 2) that is times two (for the two algebraic factors). So, these are eliminated at a better than average rate; yet, each of the survivors is still obviously composite and will be nevertheless subject to a, say, hour long prime test. Bottom line: they should be removed.[/SPOILER]
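The S100, k=64 case is the 4*c^4 family from the previous post: for even n = 2m, 64*100^n + 1 = 4*(2*10^m)^4 + 1, and 4*x^4 + 1 always splits as (2x^2-2x+1)(2x^2+2x+1). A quick numeric check of that identity, purely for illustration:

[CODE]
# aurifeuille_check.py - verify 4*x^4 + 1 = (2*x^2 - 2*x + 1) * (2*x^2 + 2*x + 1),
# which is why the even-n candidates for k=64, base 100 are always composite:
# for n = 2m, 64*100^n + 1 = 4*(2*10^m)^4 + 1.
def algebraic_factors(x):
    return 2*x*x - 2*x + 1, 2*x*x + 2*x + 1

for m in range(1, 4):
    x = 2 * 10**m
    candidate = 64 * 100**(2*m) + 1
    a, b = algebraic_factors(x)
    assert a * b == candidate
    print(f'm={m}: 64*100^{2*m}+1 = {a} * {b}')
[/CODE]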

MyDogBuster 2014-12-05 14:23

[QUOTE]srsieve -a -n 25e3 -N100e3 -P 1e6 -m 4e9 input.txt[/QUOTE]If you use the above parms, you will wind up with tests in the 25000 to 100000 range. That's already been tested on this base.

If you are sieving 1M to 3M then -n has to be 1e6 and -N 3e6

wombatman 2014-12-05 15:39

Thanks for the clarification. I was just making sure I understood the basics before I went headlong into it. I've also responded to your message with a question.

LaurV 2014-12-26 19:47

My sr2sieve does not seem to generate an output file, the same way as srsieve and sr1sieve do, but only a file with factors. Am I missing a switch/option in the command line, or am I supposed to parse-out the factors from the input file "by hand"? (or can I use the srfile in a tricky manner to remove the factors without removing the full "n", as when a prime is found?) (or do I give the factors file as a parameter to another tool, like cllr??).

(Right now I made a small Excel macro to parse the factors out, which is not extremely fast, but it did the job well, in "visual mode" hehe. I was playing with Riesel base 607, without reservation, and I made my own tools to play with small "n"s and with the files resulting from sieving and starting the base. I took it to n=7k already; there are 716 "k"s left. I do this just to understand how the tools work, but if the result is of any value, I will send it out. For my "legal" reservations - which are base 972 and base 967 - there is no need to use sr2sieve: one has only 1 "k", and for the other I took the sieved file from the web, and it is sieved high enough that it is faster to go with llr/pfgw.)

Puzzle-Peter 2014-12-26 20:44

[QUOTE=LaurV;391024]My sr2sieve does not seem to generate an output file, the same way as srsieve and sr1sieve do, but only a file with factors. Am I missing a switch/option in the command line, or am I supposed to parse-out the factors from the input file "by hand"? (or can I use the srfile in a tricky manner to remove the factors without removing the full "n", as when a prime is found?) (or do I give the factors file as a parameter to another tool, like cllr??).
[/QUOTE]

It's srfile -k<known-factor-file> -[g/w/p/whatever] <sievefile>

Don't worry about multiple factors for one candidate and stuff like that, as it will take care of all these issues and even check that the factor really divides the candidate.

LaurV 2014-12-27 03:50

Thanks a billion. I really missed the -k switch of srfile! :blush: (or, well, I didn't really understand the explanation given by -h).

LaurV 2015-03-25 07:32

I went kookoo, giving replies to myself...
Another stupid question: my srXsieve seems to count the "seconds per factor" from the beginning of sieving, which is wrong, as the sieving was much faster in the beginning and it is much slower now. I am [U]sure[/U] I read somewhere something related to "the last 30 factors found", however I can't remember where and about which program (it may be a different program, and/or my perception may be subjective; however, the siever spits out a line with a factor every ~5 seconds, and it still says that it is finding a factor per second). What am I missing?

(I think this is sr1sieve 1.4.5)

rogue 2015-03-25 14:34

[QUOTE=LaurV;398580]I went kookoo, giving replies to myself...
Another stupid question: my srXsieve seems to count the "seconds per factor" from the beginning of sieving, which is wrong, as the sieving was much faster in the beginning and it is much slower now. I am [U]sure[/U] I read somewhere something related to "the last 30 factors found", however I can't remember where and about which program (it may be a different program, and/or my perception may be subjective; however, the siever spits out a line with a factor every ~5 seconds, and it still says that it is finding a factor per second). What am I missing?

(I think this is sr1sieve 1.4.5)[/QUOTE]

I think that is sr2sieve. I don't think I modified sr1sieve to do that although I haven't looked at the code.

LaurV 2015-03-28 08:34

Another stupid question: looking at [URL="http://www.noprimeleftbehind.net/crus/Riesel-conjecture-base2-reserve.htm"]this list[/URL], I see two [U]even[/U] numbers inside. Any special reason for that? I assumed it is something I don't know, and looked for those numbers on "sister projects" sites; they don't have them, and people are talking only about "50 k's", not 52.

gd_barnes 2015-03-28 08:54

[QUOTE=LaurV;398811]Another stupid question: looking at [URL="http://www.noprimeleftbehind.net/crus/Riesel-conjecture-base2-reserve.htm"]this list[/URL], I see two [U]even[/U] numbers inside. Any special reason for that? I assumed it is something I don't know, and looked for those numbers on "sister projects" sites; they don't have them, and people are talking only about "50 k's", not 52.[/QUOTE]

All bases on this project include k's that are multiples of the base (MOB). When the Riesel and Sierp base 2 conjectures were first conjectured by Mr. Riesel and Mr. Sierpinski over 60 years ago, they only specified that odd k's be considered, not thinking that 2k*2^n-1 could have a different prime than k*2^n-1 if the latter had a prime at n=1 (since n>=1 for all k's). If the latter does not have a prime at n=1, then the prime for k and 2k will be the same and this project does not test 2k. For example, the testing for k=90646 would be the same as for k=181292 for Sierp base 2, so we do not test k=181292, but if k=90646 had a prime at n=1 then we would test k=181292. In a short discussion with Prof. Caldwell at the top-5000 site not long after this project started, he concurred that multiples of the base should be included. He even asked me if our project had tested all even k's on the Sierp base 2 side for k<78557, i.e. the "1st" conjecture. (We had already and there were no even k's remaining.) At the same time that this project was started, Prof. Caldwell was in the process of publishing a paper about the Sierp conjectures up to base 100 where he included them.

We believe that it is incorrect to not include multiples of the base in the conjectures, since they can yield different primes than k divided by the base. (The starting-bases script takes it all into account.) I have chosen not to press the issue with PrimeGrid on their Riesel project and their Extended Sierp project, partly because I thought it was an interesting exercise for us to do the few k's remaining and partly because I did not want to get into a debate about what the mathematicians had conjectured over 60 years ago.

Edit: If you would like to help in testing these k's along with other base 2/4 k's that we have searched to n=~2.9M base 2, I could use some help on PRPnet port 1400. See details at [URL]http://www.mersenneforum.org/showpost.php?p=382081&postcount=1[/URL]. It would be nice to get them pushed further ahead faster since all other base 2 k's have been searched much further.

LaurV 2015-03-28 09:29

Thanks for the prompt reply. It (somehow) makes sense. I will have a look at your link right now.

KEP 2015-03-28 13:39

[QUOTE=gd_barnes;398813]Edit: If you would like to help in testing these k's along with other base 2/4 k's that we have searched to n=~2.9M base 2, I could use some help on PRPnet port 1400. See details at [URL]http://www.mersenneforum.org/showpost.php?p=382081&postcount=1[/URL]. It would be nice to get them pushed further ahead faster since all other base 2 k's have been searched much further.[/QUOTE]

Don't worry my friend. I have just begun a collaboration with Reb about using SRBase to complete k=3677878 for R3 to n=2M, which he can do in about 15 days where I would need at least 100 days. So in about 2-3 weeks, I'll move my Sandy Bridge to your aid, and in about 100 days, I'll move my offline Haswell to enhance the sieving for the current SR base 2/4 search. But even if I decide to chime in, we still need a lot of resources to really move somewhere, so please LaurV, if you can spare some resources and join the effort on port 1400, we could actually start seeing some serious progress. But as stated, as soon as R3, k=3677878 is completed to n=2M, I'm moving more and more resources to this effort running on port 1400.

Puzzle-Peter 2015-05-30 20:11

I seem to remember people sieving Riesel and Sierpinski candidates for the same base in one file. How do you do that? I am looking at S/R 22 which are both at n=1M now.

gd_barnes 2015-05-30 21:02

[QUOTE=Puzzle-Peter;403254]I seem to remember people sieving Riesel and Sierpinski candidates for the same base in one file. How do you do that? I am looking at S/R 22 which are both at n=1M now.[/QUOTE]

The same way that you do k's remaining for a base on one side. Use srsieve with the -a switch and sieve to a nominal depth. Reference a k's remaining file that contains:
5128*22^n+1
3656*22^n-1

Then use sr2sieve to continue sieving the output file from the above. With one k on each side, base 22 is a great base to do this on.

Puzzle-Peter 2015-05-31 06:34

Haha this is so easy I didn't even think it would work... thanks!

Batalov 2015-07-06 10:12

1 Attachment(s)
[QUOTE=Batalov;389234]Another interesting case. S100, k=64. Note that 100 is a square (therefore for even n, 100^n will be the 4th power). All even n's should be removed from the sieve file for k=64, [URL="http://www.noprimeleftbehind.net/crus/sieve-sierp-base100-250K-1M.txt"]but they aren't[/URL].[/QUOTE]
I attached here a more generalized version of the script that should help to remove algebraically composite candidates from CRUS-related ABCD, NPG or PFGW files. The instructions are [URL="http://mersenneforum.org/showpost.php?p=405185&postcount=61"]the same as for the Riesel sieves[/URL] (except, obviously, when the algebraic file is empty; then nothing needs to be done). Anyone can try this script on their own local workfiles, and Gary could run the script on all server-hosted sieve files.

[I]Disclaimer[/I]: this script is only slightly adapted from the base=2 version (base 2 is obviously squarefree) to a general-base version and as such does not take into account special k <-> b interactions when the base is not squarefree (for these, use a sheet of paper, pencil and some thought; plus, the [B]isPower [/B]script is somewhere here, on the forum); however, a lot of differences of squares, and sums and differences of odd powers, will be removed (e.g. try S618 or R79 or R88). And only the simplest Aurifeuillian will be removed from the Sierp series (most likely it will only happen when k=2500: 50*b[SUP]2m[/SUP]-+10*b[SUP]m[/SUP]+1 | 2500*b[SUP]4m[/SUP]+1; all other Aurifeuillians have a covering set of tiny factors).

Puzzle-Peter 2016-06-02 19:41

What's the latest version of srsieve? I could find source code for 1.0.7 but the latest executable I found is 0.6.17. Is there a more recent executable for 64 bit linux somewhere?

rogue 2016-06-02 20:13

[QUOTE=Puzzle-Peter;435398]What's the latest version of srsieve? I could find source code for 1.0.7 but the latest executable I found is 0.6.17. Is there a more recent executable for 64 bit linux somewhere?[/QUOTE]

Possibly. Are you able to build your own for Linux?

Puzzle-Peter 2016-06-03 13:13

[QUOTE=rogue;435403]Possibly. Are you able to build your own for Linux?[/QUOTE]

As long as it's not too complicated. I have gcc44 available.

VBCurtis 2017-01-14 01:03

What's the flag to put into llr.ini to tell llr to ignore a k-value after a prime is found? It's not in the first post of this thread (that post still instructs us to use pfgw always, which is not fast, right?), and I'm too lazy to read the entire thread to find the fix.

Something like stoponprimedk=1?

pepi37 2017-01-14 08:27

[QUOTE=VBCurtis;450917]What's the flag to put into llr.ini to tell llr to ignore a k-value after a prime is found? It's not in the first post of this thread (that post still instructs us to use pfgw always, which is not fast, right?), and I'm too lazy to read the entire thread to find the fix.

Something like stoponprimedk=1?[/QUOTE]

StopOnPrimedK=1

LaurV 2017-05-25 16:59

I am sieving 2*1909^n-1 for 100k<n<=200k, with both sr1sieve and sr2sieve, starting from a pre-sieved file (done with srsieve to 1e9 or so), and I am getting different lists of factors. Am I doing something stupid, or did I just uncover a bug in sr1sieve that seems to make it miss many factors? Can anybody reproduce this?

(edit: you only need to sieve for a minute or so; the first difference appears at 1568702129 | 2*1909^169938-1, for which sr2sieve agrees but sr1sieve disagrees)

MisterBitcoin 2017-05-25 17:42

[QUOTE=LaurV;459716]I am sieving 2*1909^n-1 for 100k<n<=200k, with both sr1sieve and sr2sieve, starting from a pre-sieved file (done with srsieve to 1e9 or so), and I am getting different lists of factors. Am I doing something stupid, or did I just uncover a bug in sr1sieve that seems to make it miss many factors? Can anybody reproduce this?

(edit: you only need to sieve for a minute or so; the first difference appears at 1568702129 | 2*1909^169938-1, for which sr2sieve agrees but sr1sieve disagrees)[/QUOTE]

sr1sieve gave me the following factor for your candidate.
[CODE]1031811247 | 2*1909^169938-1[/CODE]sr2sieve gave me the same result.

Both factor files have the same factor count and found the same factors. (Tested from 1e9 up to 5e9)

Which versions of sr1sieve/sr2sieve are you using? Can you upload your factor files?

rogue 2017-05-25 18:11

The latest versions, sr1sieve 1.4.5 and sr2sieve 1.9.3, agree on factors on Win64. I have not made any code changes in either in over four years. If you are using those releases, but on a different OS, please let me know.

LaurV 2017-05-26 00:38

1 Attachment(s)
[QUOTE=rogue;459722]The latest versions, sr1sieve 1.4.5 and sr2sieve 1.9.3, agree on factors on Win64. I have not made any code changes in either in over four years. If you are using those releases, but on a different OS, please let me know.[/QUOTE]
I am using the latest versions, as indicated. The problem is reproducible on Win7 64-bit, with an i7-6950X. Starting from a file with less than 11500 candidates remaining, generated with srsieve, I run in two different folders:

sr2sieve -P 5e12 -R 1500 -w -i sr_1909.pfgw

respectively

sr1sieve -P 5e12 -i sr_1909.pfgw -o t17_b1909.prp -f factors.out

The result has over 100 different lines, like this:
[ATTACH]16141[/ATTACH]

I will have to go to work today and see if the problem is reproducible on the computers there, and late tonight back home I will try the other computers; last night it was too late already.

pepi37 2017-05-26 08:37

[QUOTE=LaurV;459753]I am using the latest versions, as indicated. The problem is reproducible on Win7 64-bit with an i7-6950X. Starting from a file with less than 11500 candidates remaining, generated with srsieve, I run in two different folders:

sr2sieve -P 5e12 -R 1500 -w -i sr_1909.pfgw

respectively

sr1sieve -P 5e12 -i sr_1909.pfgw -o t17_b1909.prp -f factors.out

The result has over 100 different lines, like this:
[ATTACH]16141[/ATTACH]

I will have to go to work today and see if the problem is reproducible on the computers there, and late tonight back home I will try the other computers; last night it was already too late.[/QUOTE]

I did the same initial sieve as you, but my starting file has more than 11500 candidates left (11596); that is the first difference.
Second: using sr1sieve and sr2sieve on Win7 x64 gives me the same output, the same number of factors, and the same value as MisterBitcoin (1031811247 | 2*1909^169938-1).
Third: I used your command lines and again got the same results, the same number of factors.

LaurV 2017-05-26 13:19

1 Attachment(s)
[QUOTE=pepi37;459765]I did the same initial sieve as you, but my starting file has more than 11500 candidates left (11596); that is the first difference.
Second: using sr1sieve and sr2sieve on Win7 x64 gives me the same output, the same number of factors, and the same value as MisterBitcoin (1031811247 | 2*1909^169938-1).
Third: I used your command lines and again got the same results, the same number of factors.[/QUOTE]
Typo, less than 11600 (we get 11596 too). I was in a hurry this morning to get to work.
More details:
- it only happens on the 6950X and [U]only[/U] when all 20 cores (HT) are sieving (there are 21 bases left and I was sieving them all at the same time) with [U]sr2sieve[/U] (so the issue is with sr2sieve, not with sr1sieve). The additional factors appear when I sieve 20 or 21 of those bases.
- all additional factors that appear in the factor file are redundant (sometimes more than once), i.e. there exist lower factors for those candidates (see the example from MisterBitcoin above). It looks like sr2sieve is somehow saving all duplicates even if the candidate was already eliminated (a quick check for this is sketched at the end of this post).
- it is not related to temperature; the CPU is not stressed and it does not get hot (I can even run a few more threads of cllr in the background with no speed/heat difference, but the problem will not occur with, say, 18 threads of sr2sieve and a few threads of cllr)
- it is not related to the base; other bases behave exactly the same, see the attached picture.

[ATTACH]16147[/ATTACH]

The reason I was using sr2sieve (instead of sr1sieve, as would be normal for a single base and single k) is the -R switch, which sr1sieve does not have. Otherwise I wouldn't deliberately plan to run manual factor elimination, and I wouldn't use sr2sieve. The speeds of the two are identical. Why did I try a double-check with sr1sieve for this base (we are talking about 1909)? Well, long story short, after sieving to 5e12 all the other bases (2*b^n-1) have around 4k candidates left for 100k<n<200k, except this one, which has 9k candidates left for cllr. I thought, what the heck, it may be a mistake, maybe I missed a zero or so, or it must be because there are 20 threads and 21 bases and maybe this one was left out? Therefore I did it again, only for this base, with sr1sieve. Well, the result was the same, except for the ~100 fewer factors. But after eliminating the factors in both cases, I get the same final file for cllr in both cases, proving that all of them were duplicates. It looks like this base really does have far fewer low factors than the other 20 bases.

But it is still strange why sr2sieve behaves that way.
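
A quick way to check the redundancy claim mentioned above (the factor file name is just a placeholder; one "p | candidate" line per factor is assumed) is to list every candidate that received more than one factor:

[CODE]awk -F'|' '{print $2}' factors.out | sort | uniq -d[/CODE]

If that prints the same candidates that differ between the two runs, the extra lines are just duplicates.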

LaurV 2017-05-26 13:38

1 Attachment(s)
Because we have totally hijacked this thread anyhow...
(I eliminated the macros and the DDE part; you can click on the plus signs or on the little 1/2 tabs in the upper left, or look at the second sheet too)

gd_barnes 2017-05-26 19:47

If you stop sr2sieve mid-stream and then resume it from its checkpoint file, it will not remember factors that it has found and so will find additional factors for the same term. That's just a quirk of the program. It doesn't hurt anything.

When hyperthreading, I believe that each core will not know when a different core has found a factor and so you will get more than one factor for the same term on different cores.

LaurV 2017-05-27 05:30

[QUOTE=gd_barnes;459811]If you stop sr2sieve mid-stream and then resume it from its checkpoint file, it will not remember factors that it has found and so will find additional factors for the same term. That's just a quirk of the program. It doesn't hurt anything.

When hyperthreading, I believe that each core will not know when a different core has found a factor and so you will get more than one factor for the same term on different cores.[/QUOTE]
The first part of your post may be the real reason; I may have restarted sr2sieve a couple of times during the tests and forgotten to clear the checkpoint file. If that is the case, then sorry for the false alarm :redface:, and thanks for pointing it out.

However, the second part of your post doesn't make sense. The instances of the process were separate (different bases, remember), so there were 21 tasks running and no shared memory, as memory is bound to the process, not to the core. If changing cores caused the program to lose data, for example every time the affinity changes, then [U]that[/U] would be a serious bug, not only in the program but in the whole system itself (computer, hardware, software, the whole concept).

gd_barnes 2017-05-27 07:12

You can ignore the second paragraph of my post. You're right, it makes no sense. I was thinking of multiple cores on one base... not hyperthreading one base on a single core (or multiple cores). If you're running multiple cores on one base, one core would not know of the factors found by the other cores. Anyway, oops. :-)

After reading through the thread I am near 100% sure that my first paragraph is what happened to you. But the remedy that you mentioned would not work when stopping sr2sieve mid-stream. It doesn't matter what you do to the checkpoint file, sr2sieve will not "remember" previous factors found when you restart it. While it is running it stores them in memory, so you will get no duplicates if you let it run through to the end. When it finishes, or you stop it mid-stream, it clears out that memory, although the factors.txt file is still there. When restarted, the factors.txt file will then be appended to, with potentially more than one factor per term, because sr2sieve will not know which factors had already been found before it was stopped.
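
If the duplicated lines are a nuisance, a one-liner along these lines (purely illustrative; it assumes "p | candidate" lines and a placeholder file name) keeps only the smallest factor reported for each term:

[CODE]sort -t'|' -k2,2 -k1,1n factors.txt | awk -F'|' '!seen[$2]++' > factors.clean.txt[/CODE]

One factor per term is all that is needed to remove the candidate, so nothing is lost.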

LaurV 2017-05-27 08:55

[QUOTE=gd_barnes;459840]But the remedy that you mentioned would not work when stopping sr2sieve mid-stream. It doesn't matter what you do to the checkpoint file, sr2sieve will not "remember" previous factors found when you restart it.[/QUOTE]
Yes, but when you delete the checkpoint, it will start from scratch (as the input file is not changed unless I manually eliminate the factors with srfile). That is easy to see in the factor list: all the factors repeat themselves from the beginning, not just one single lost factor somewhere (that happened to me too, and I manually edited the factor file to eliminate the duplicate bunch at the end, but that is a different thing). We are good now. Sorry again for the false alarm.

sweety439 2017-05-30 15:50

1 Attachment(s)
[QUOTE=LaurV;459785]Because we have totally hijacked this thread anyhow...
(I eliminated the macros and the DDE part; you can click on the plus signs or on the little 1/2 tabs in the upper left, or look at the second sheet too)[/QUOTE]

@LaurV:

Can you update a file like this? For all bases 2<=b<=2048.

Besides, what is your search limit? 100K? 200K? 500K? or 1M?

LaurV 2017-05-31 02:18

[QUOTE=sweety439;460060]@LaurV:

Can you update a file like this? For all bases 2<=b<=2048.

Besides, what is your search limit? 100K? 200K? 500K? or 1M?[/QUOTE]

Did you open the file I posted? Did you do any of the things I said you could do in the post (i.e. clicking on the little 1/2 icons in the upper left corners, clicking on the +/- signs, etc.)? Or did I just waste my time "cleaning" it of the macros and DDE stuff that brings the results into it, to make it "safe" for you to open? (No, I don't put the results in manually.) That file was posted mostly [U]for you[/U] (and partially for Gary). Look inside. What you need is just a select all, copy, paste.

KEP 2017-05-31 15:31

[QUOTE=mdettweiler;460108]Yay! :w00t: Glad to see another one of those fall - it's been a while!

:lavalamp::chappy::groupwave::george:[/QUOTE]

:smile:

Yes, it has been a while. It kind of makes me hope that one more prime could be hiding in the bulk of tests for the remaining 9 k's I'm currently testing on the Riesel side to n=750K.

On a side note, the "LLR 3.8.20 going live" thread at PrimeGrid offers a way to run LLR multithreaded on PRPnet. It can increase testing throughput by a lot. By switching from 3 cores each running a single thread to 2 instances of 2 threads, I increased my overall production by 70%/day, and by converting my base 16 numbers to plain base 2 numbers before starting LLR, I increased my productivity by an additional (more or less) 25%/day compared to testing the same numbers as base 16 numbers.
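
For reference, the base 16 to base 2 conversion only rewrites the exponent (the numbers themselves do not change), since 16 = 2^4; the numeric example below is just an illustration:

[CODE]k*16^n + c  =  k*(2^4)^n + c  =  k*2^(4*n) + c
e.g.  2*16^125000 - 1  =  2*2^500000 - 1[/CODE]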

A bit off-topic, but nice to know for those who want to increase their overall productivity while also producing less heat :wink:

Thanks for your congratulations :smile:

Take care.

