mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   Conjectures 'R Us (https://www.mersenneforum.org/forumdisplay.php?f=81)
-   -   Software/instructions/questions (https://www.mersenneforum.org/showthread.php?t=9742)

mdettweiler 2017-05-31 18:59

[QUOTE=KEP;460142]On a side note, the LLR 3.8.20 going-live thread at PrimeGrid offers a solution for running LLR multithreaded on PRPnet. It could increase testing throughput by a lot. By switching from 3 cores each running a single-threaded instance to 2 instances of 2 threads each, I increased my overall production by 70%/day, and by converting my base 16 numbers to plain base 2 numbers before starting up LLR, I increased my productivity by an additional 25%/day (more or less) compared to testing the same numbers as base 16 numbers.

A bit off-topic, but nice to know for those who do want to increase their overall productivity with less heat production as well :wink:
[/QUOTE]
Is there a good "rule of thumb" for determining when it's better to use fewer instances with more threads, versus more instances with one thread each?

My "simple" understanding of this is that it's better to run one LLR on each core with one thread each, because there are plenty of separate candidates to keep all cores busy, and that way you have no losses due to the imperfect scaling of parallelizing a single test across multiple cores. Based on this, I always understood multithreaded LLR to be something primarily useful for when we get to "really huge" tests (i.e., GIMPS or SoB level) when it becomes more important to (e.g.) verify a single prime in the shortest amount of time than to maximize overall throughput.

But perhaps this understanding is outdated. Is it a matter of memory bandwidth that makes multithreading useful even for our (relatively) "small" tests? Or is it something to do with hyperthreading?

pepi37 2017-05-31 19:10

If you have a "normal" Intel CPU with a 6 or 8 MB L3 cache, then start using the -t2 switch only above a 256-288K FFT size. -t2 can handle up to 480K or even 512K, and then use -t3 up to 768K.
So it is basically 256K per core, or per 1.5 MB of cache.
But that is only me :)
When you run, for example, a 320K candidate per core on a quad-core CPU, it goes into overhead and begins to slow down. Prime95 has supported the -t2 switch for a long time, and I use it. Now LLR can do the same.
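To put that rule of thumb into one worked example (these are only my observed numbers, not an exact formula):

[CODE]rule: ~256K of FFT per single-threaded client, i.e. ~256K per 1.5 MB of L3

quad core, 6 MB L3  -> 1.5 MB per core -> up to ~256K FFT per 1-thread client
candidate at 320K FFT -> over the per-core budget -> slowdown, switch to -t2
candidate at 512K FFT -> near the top of the -t2 budget (~480-512K)[/CODE]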

mdettweiler 2017-06-01 18:07

OK, so if I'm understanding you correctly - the goal is to keep the working set of all LLRs (across all cores) within the L3 cache size of the CPU, right? And this scales (roughly) with the FFT size?

So...one of my computers is a dual-core Sandy Bridge with 4 MB of L3 cache. It is testing Riesel base 23 around n=1.4M, which is using a 560K FFT size. 560K * 2 = 1120K, which is well within the 4 MB of L3 cache. Does that mean I should keep using 2 separate threads instead of -t2?

Obviously the answer would be different for a quad-core, since it has not much more cache and more cores to share it with.

Actually...should I be running *4* separate clients on this machine? 560K * 4 = 2240K, which still fits within L3 cache. The machine has 2 physical cores but 4 logical cores due to hyperthreading. I had always understood that LLR didn't gain much from hyperthreading, but since LLR performance is memory-bandwidth-intensive, perhaps there is some room to gain there...maybe I should try this.


(P.S.: Gary, you might want to move this exchange to the "Software/Instructions/Questions" thread, since it's a bit off-topic here...)

pepi37 2017-06-01 19:02

[QUOTE=mdettweiler;460246]So...one of my computers is a dual-core Sandy Bridge with 4 MB of L3 cache. It is testing Riesel base 23 around n=1.4M, which is using a 560K FFT size. 560K * 2 = 1120K, which is well within the 4 MB of L3 cache. Does that mean I should keep using 2 separate threads instead of -t2? [/QUOTE]

So your candidate has a 560K FFT and you are on a dual-core machine, so each core has 2 MB of L3 cache.
Let's do the math: 288K is 1 MB, 576K is 2 MB, and since you have 4 MB and two cores, the answer is: you should use -t2.


[QUOTE=mdettweiler;460246]Actually...should I be running *4* separate clients on this machine? 560K * 4 = 2240K, which still fits within L3 cache. The machine has 2 physical cores but 4 logical cores due to hyperthreading. I had always understood that LLR didn't gain much from hyperthreading, but since LLR performance is memory-bandwidth-intensive, perhaps there is some room to gain there...maybe I should try this. [/QUOTE]

For me, and for many on this forum, there is no HT; it is fiction. And fiction and reality do not match at all.
If you use 4 candidates in parallel, then by the same rule
288K = 1 MB, 576K = 2 MB
you would need to have 8 MB of cache, and you only have 4 MB of L3 cache.

VBCurtis 2017-06-02 00:57

[QUOTE=pepi37;460250]
288K = 1 MB, 576K = 2 MB [/QUOTE]

From where do you get these numbers? I get that an iteration moves more data than the exact FFT size, but I'd like a more definitive source if you know of one.

pepi37 2017-06-02 06:35

[QUOTE=VBCurtis;460300]From where do you get these numbers? I get that an iteration moves more data than the exact FFT size, but I'd like a more definitive source if you know of one.[/QUOTE]
As I said before in this post, "it is my observation".
Many users here will say: put one candidate per core and run it; it is optimal. I will say it is anything but optimal.
Also, there was a post here or on PrimeGrid:
[QUOTE]Multiply FFT size by 8 to get it in bytes e.g. 128k FFT size = 1024kB. That would be 256k for 2MB, 512k for 4MB, 768k for 6MB, 1024k for 8MB. I would caution that's only for the FFT, and I don't know if there is much else needed for other things, so beware if you're at a limit. As an observation it seems to hold ok.[/QUOTE]

KEP 2017-06-02 12:17

[QUOTE=mdettweiler;460246]Actually...should I be running *4* separate clients on this machine? 560K * 4 = 2240K, which still fits within L3 cache. The machine has 2 physical cores but 4 logical cores due to hyperthreading. I had always understood that LLR didn't gain much from hyperthreading, but since LLR performance is memory-bandwidth-intensive, perhaps there is some room to gain there...maybe I should try this.
[/QUOTE]

Well, you should definitely not run more clients than you have cores :smile:

With LLR 3.8.20, thanks to Batalov, LLR became multithreaded, which in reality means that we ventured into a whole new area of unknowns. What is known is that most computers gain from using more than 1 core per client, if the FFT length is large enough. What appears to make the big difference is that most of our clients still suffer bottlenecks while the CPU waits for RAM to catch up. This bottleneck is severely reduced by running more cores per client.

What works best at your test level on your machine, you have to figure out by timing LLR. But most likely you are losing performance if you are running one core per client. I can give you an example from my Sandy Bridge: it tested a base 16 number at around 3999 sec/test at n=2.4M (base 2); now, for the same k at n=2517108, it tests on 2 cores at around 1899 seconds per test. The difference between the (pre-multithreading) most productive testing scheme and the current testing scheme looks like this:

3 clients running 1 core each: 3*86400s = 259200 client-seconds / 3999 sec/test = 64.82 tests/day
2 clients running 2 cores each: 2*86400s = 172800 client-seconds / 1899 sec/test = 91.00 tests/day

So as you can see, even though I'm currently testing an n-value 5% larger than the one completing on a single core, I'm doing 40.4% more work a day (in count of completed tests)... if you count the number of completed bits, I'll have an even higher productivity gain :smile:

But whether or not you gain from multithreading is something you can only find out through testing done locally on your own system.

Take care

KEP

P.S. The line in llr.ini you need to add is as follows: ThreadsPerTest= :smile:
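For example, to run each test on two threads, the line would be (2 is just an illustration; use whatever thread count your own timings favor):

[CODE]ThreadsPerTest=2[/CODE]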

mdettweiler 2017-06-03 02:31

Got it. Probably worth trying, then, since (per pepi37's 8x rule for determining the working set of a given FFT) a 560K FFT x 8 = 4480 kB working set per core i.e. 8960 kB for two cores. Clearly much larger than my 4 MB L3 cache; even 1 core would still be larger, but perhaps less memory bandwidth pressure would still be a good thing.

I'll try this as soon as I get the chance - thanks for all the info! :smile:

mdettweiler 2017-06-03 19:22

[QUOTE=mdettweiler;460384]Got it. Probably worth trying, then, since (per pepi37's 8x rule for determining the working set of a given FFT) a 560K FFT x 8 = 4480 kB working set per core i.e. 8960 kB for two cores. Clearly much larger than my 4 MB L3 cache; even 1 core would still be larger, but perhaps less memory bandwidth pressure would still be a good thing.

I'll try this as soon as I get the chance - thanks for all the info! :smile:[/QUOTE]
Just to follow up on this: I tried running with -t2 overnight and I've gotten [i]at least[/i] an 8% improvement in overall efficiency. Nice!

I say "at least" because normal variation in test times due to background processes, etc. makes it difficult to get an exact figure. I took a conservative estimate by taking the longest -t2 test time I observed, multiplying it by 2, and comparing that with the shortest -t1 test time I had on record from the last few days. So I'm probably getting more than 8% improvement on the whole.

It's tough to get a more accurate measurement on this computer because it's also used for "real work" and doesn't run PRPnet continuously (just when I'm not using it). Hopefully I'll be able to get some more accurate numbers from some of my other boxes that crunch (sort of) full time.

mdettweiler 2017-06-05 07:33

After another day's worth of running with -t2, I have a better sample size of test timings to work with and it looks like I am getting a solid [b]16% reduction in average test times[/b] (averaged over the last 5 tests with -t2 compared with 5 tests on one of two single-threaded clients, normalized by multiplying the -t2 average time by 2).

Since even one client is still too big to fit in my 4 MB L3 cache (the x8 rule says that a 560K FFT = 4480 kB memory working set), it appears that this benefit is [I]purely[/I] from reducing pressure on the memory bus. There should be a [i]lot[/i] more to be gained for tests small enough to fit entirely within the cache when appropriately multithreaded. And, the benefit would be even greater for newer CPUs which are more prone to outrun their memory buses (my Sandy Bridge is relatively old at this point). Given this, I can totally see where those 40% and 70% productivity increases KEP cites are coming from!

Thanks guys for pointing this out to me - I can see why everyone's talking about it as a big revolution! :smile:


(And yes, this exchange should [b]definitely[/b] be moved to the "Software/Instructions/Questions" thread....)

KEP 2017-06-05 13:18

[QUOTE=mdettweiler;460552]Thanks guys for pointing this out to me - I can see why everyone's talking about it as a big revolution! :smile:[/QUOTE]

You're welcome :smile:

wikimax 2017-06-28 23:51

R247 reservation
 
Dear Mr. Barnes!

Is it possible to reserve the following interval:

Riesel
b=247
k=1 to 469184 (all k, I want to start a new range)
n=1 to 2^12=4096

I have only one PC and I want to check this interval with the program “Mathematica”. Should I post the results in this forum? I want to transfer the results into an Excel file for better readability. Or do you prefer another program (not Mathematica)? If yes, how can I start it?

With regards!

ET_ 2017-06-29 09:58

[QUOTE=wikimax;462327]Dear Mr. Barnes!

Is it possible to reserve the following interval:

Riesel
b=247
k=1 to 469184 (all k, I want to start a new range)
n=1 to 2^12=4096

I have only one PC and I want to check this interval with the program “Mathematica”. Should I post the results in this forum? I want to transfer the results into an Excel file for better readability. Or do you prefer another program (not Mathematica)? If yes, how can I start it?

With regards![/QUOTE]

Can't you use a siever and either the LLR or pfgw programs for such a search?

Luigi

rogue 2017-06-29 13:38

[QUOTE=wikimax;462327]Dear Mr. Barnes!

Is it possible to reserve the following interval:

Riesel
b=247
k=1 to 469184 (all k, I want to start a new range)
n=1 to 2^12=4096

I have only one PC and I want to check this interval with the program “Mathematica”. Should I post the results in this forum? I want to transfer the results into an Excel file for better readability. Or do you prefer another program (not Mathematica)? If yes, how can I start it?[/QUOTE]

Use srbsieve. Read [URL="http://www.mersenneforum.org/showthread.php?t=20357"]this thread[/URL] to learn how to use it. You will also need newpgen, srsieve, and pfgw with srbsieve. Trust me, it will save you many weeks, if not months, of time.

Also, you need to reserve n to 10,000 at the minimum, but preferably to 25,000.

gd_barnes 2017-06-30 09:31

Wikimax,

We cannot accept reservations for bases to n<10000. You will need to read our software thread. As discussed by others, you will need to use the appropriate software to test the bases. Mathematica will be very inefficient for doing these searches.


Gary

rogue 2018-04-16 13:07

[QUOTE=germanNinja;485442]I am confident I will maintain interest. Right after I made my first post yesterday, I downloaded everything I needed, researched how to split the workload across computers, and started crunching on two of my three computers. My plan for this morning was to split it up to my third computer as well. I am no stranger to long tasks -- I have done a small amount of GIMPS crunching. While I have not done anything that takes around a year, I have done GIMPS tasks that take well over a month.

If you will not reserve S606 for me, that's fine -- I'll look into your suggestions, more GIMPS work, or other BOINC projects. However, I ask that you allow me to reserve it. I'll send you progress reports as often as you want as proof that I'm sticking with it. One of my flaws is being stubborn :)[/QUOTE]

Will you be using PRPNet to distribute the work across the various computers and cores?

germanNinja 2018-04-16 13:18

No. I used srfile to split up the work by k, and again to combine multiple k's together in a sieve file. If one computer runs out of work, it is fairly simple for me to split the largest file again. The computer I bring to school has to go through the school's proxy, which blocks PRPNet, at least for PrimeGrid.

pepi37 2018-04-16 15:07

[QUOTE=rogue;485444]Will you be using PRPNet to distribute the work across the various computers and cores?[/QUOTE]

Rogue, this idea was always on my mind, but reading up on and setting up PRPNet is definitely not an easy task, especially if you don't have experience with that kind of work. So I take a piece of work and split it up in an XLS table :) It is faster :)

rogue 2018-04-16 15:47

[QUOTE=pepi37;485453]Rogue, this idea was always on my mind, but reading up on and setting up PRPNet is definitely not an easy task, especially if you don't have experience with that kind of work. So I take a piece of work and split it up in an XLS table :) It is faster :)[/QUOTE]

What is difficult with it? Installing a database is the hardest part. PM me with questions.

rogue 2018-04-16 15:49

[QUOTE=germanNinja;485445]No. I used srfile to split up the work by k, and again to combine multiple k's together in a sieve file. If one computer runs out of work, it is fairly simple for me to split the largest file again. The computer I bring to school has to go through the school's proxy, which blocks PRPNet, at least for PrimeGrid.[/QUOTE]

It probably won't block if within the network, but there is only one way to find out.

unconnected 2020-02-27 10:54

BTW, what are the right options to test CRUS bases on PRPNet? I tried to use servertype=1 and servertype=3; in both cases the search continues despite the fact that a prime for the specific k has already been found. I remember in early versions of PRPNet (4.x maybe) there was a 'sierpinskiriesel=1' option that prevented such behavior. Any suggestions?

rogue 2020-02-27 14:00

[QUOTE=unconnected;538434]BTW, what are the right options to test CRUS bases on PRPNet? I tried to use servertype=1 and servertype=3; in both cases the search continues despite the fact that a prime for the specific k has already been found. I remember in early versions of PRPNet (4.x maybe) there was a 'sierpinskiriesel=1' option that prevented such behavior. Any suggestions?[/QUOTE]

servertype=1

If you have more k than clients, then you might also want:

onekperinstance=1

If you e-mail me your prpserver.log I can take a look at what is happening.

gd_barnes 2020-03-19 04:16

Can someone post the latest Windows version of srsieve2 here, along with a README or other help file that shows how to use it?

I would like to do some sieving on many k's for Riesel base 3. All are k > 25G.

Thank you.

KEP 2020-03-19 11:54

[QUOTE=gd_barnes;540134]Can someone post the latest Windows version of srsieve2 here, along with a README or other help file that shows how to use it?

I would like to do some sieving on many k's for Riesel base 3. All are k > 25G.

Thank you.[/QUOTE]

1. Download mtsieve here: [url]https://sourceforge.net/projects/mtsieve/files/[/url] (left click the file and choose open or save when asked by the browser)

2. Unzip it, or open the folder, and copy srsieve2 to your working folder

3. Open a cmd prompt as administrator, or directly from the working folder by using shift+right-click and choosing "Open command window here"

4. Type the following in the new command window:

srsieve2 -P (max sieve value) -w 10000000 (can be excluded, but I have found on my quad that it runs faster if I set the w(orksize) to 10M primes per thread compared to the default of 1M) -W (number of threads you want srsieve2 to use; it makes no sense not to set it to the number of available threads) -n (min n value) -N (max n value) -s (name of sequence file)

5. Hit enter and srsieve2 should start processing work at once :smile:
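To make step 4 concrete, here is an illustrative invocation (the values are examples only: an n-range of 2.5k-10k sieved to P=1G on 4 threads, with a sequence file named sequences.txt):

[CODE]srsieve2 -P 1000000000 -w 10000000 -W 4 -n 2500 -N 10000 -s sequences.txt[/CODE]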

This will get you going from scratch :smile:

In case of problems, throw me a PM and I'll try to help you :smile:

Take care and stay safe.

If a resume is necessary at any point, I think this will work:

srsieve2 -p (min sieve value to start from; may not be necessary if using all threads) -P (max sieve value) -w 10000000 (as above) -W (number of threads, as above) -n (min n value) -N (max n value) -i (name of the output file created in step 5, or a file created by srsieve or srfile that needs further sieving)
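Again purely illustrative (resuming the same example range from P=715M using an existing sieve file; the file name is made up):

[CODE]srsieve2 -p 715000000 -P 1000000000 -w 10000000 -W 4 -n 2500 -N 10000 -i sieve_in.pfgw[/CODE]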

rogue 2020-03-19 13:16

srsieve2 cannot read some formats output by srsieve/srfile. You will want output from those programs to be in "pfgw" format so that srsieve2 can read them.

Note that srsieve2 does not support Legendre tables yet. I should have some time to work on that in the near future.

Of course any suggestions for improvements are welcome.

gd_barnes 2020-03-20 06:42

Thank you Kenneth and Mark! :smile:

MyDogBuster 2020-07-01 21:16

I'm currently running PRPNet 4.3.5. I want to upgrade to something newer, but I keep getting parsing errors between the client and server. What I need is a stable Windows server and client. I need executables, not instructions to do a make. I'm 74 and I'm done fiddling with makes.

rogue 2020-07-01 23:28

[QUOTE=MyDogBuster;549563]I'm currently running PRPNet 4.3.5. I want to upgrade to something newer, but I keep getting parsing errors between the client and server. What I need is a stable Windows server and client. I need executables, not instructions to do a make. I'm 74 and I'm done fiddling with makes.[/QUOTE]

I posted the PRPNet 5.4.4 Windows exes over at SourceForge. If you are upgrading a database instead of creating a new one, you will need to run this DDL to fix the tables:

alter table Candidate drop column HasSierpinskiRieselPrime;
alter table CandidateGroupStats drop column SierpinskiRieselPrime;
alter table CandidateGroupStats add column SierpinskiRieselPrimeN int default 0;
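If you haven't applied ad-hoc DDL to the PRPNet database before, one way to run it is through the mysql command-line client (the database name, user, and file name here are placeholders for your own):

[CODE]mysql -u prpnet_user -p prpnet < upgrade_544.sql[/CODE]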

MyDogBuster 2020-07-02 04:24

Thanks Mark

MyDogBuster 2020-07-07 18:28

Does someone have a source folder from 5.4.4? The download from SourceForge didn't have one, or I can't find it. I seem to have a corrupted create-tables file; MySQL will create the table, but it then gives me a bad database name when I try to open the database. TIA

rebirther 2020-07-07 18:40

[QUOTE=MyDogBuster;549963]Does someone have a source folder from 5.4.4? The download from SourceForge didn't have one, or I can't find it. I seem to have a corrupted create-tables file; MySQL will create the table, but it then gives me a bad database name when I try to open the database. TIA[/QUOTE]


[url]https://sourceforge.net/p/prpnet/code/93/tree/[/url] -->download snapshot

MyDogBuster 2020-07-07 22:17

Thanks Reb. Right in front of me in plain sight. Kinda like my hands and feet; I lose those from time to time also. It's hell getting old. I'll turn 75 in a couple of weeks and I hope I don't misplace the birthday cake, LOL

gd_barnes 2020-07-24 23:12

The discussion about changes to srsieve2 has been moved to a new thread here:
[url]https://mersenneforum.org/showthread.php?t=25773[/url]

MyDogBuster 2020-08-06 15:16

Is there a prpadmin program out there that doesn't have the duplicate check in it? I want to load some rather large files. A run switch - something.

rogue 2020-08-06 16:23

[QUOTE=MyDogBuster;552779]Is there a prpadmin program out there that doesn't have the duplicate check in it? I want to load some rather large files. A run switch - something.[/QUOTE]

The duplicate check is done in the server. How large a file are you trying to load?

MyDogBuster 2020-08-06 19:45

5.5M tests

rogue 2020-08-06 21:22

[QUOTE=MyDogBuster;552792]5.5M tests[/QUOTE]

I have loaded about 2 million tests. It takes about 40 minutes.

If possible I suggest that you break it up into smaller files (maybe 500,000 each), each with a different set of k. This will allow you to start working on candidates while others are loading.

MyDogBuster 2020-08-06 21:53

[QUOTE]If possible I suggest that you break it up into smaller files (maybe 500,000 each), each with a different set of k. This will allow you to start working on candidates while others are loading.[/QUOTE]

That's the exact thing I'm trying to avoid. I just turned 75 on Monday and can't see too well and don't move around too much. A straightforward one-time load of files, without having to bust them up, would be ideal.

rogue 2020-08-07 02:38

[QUOTE=MyDogBuster;552801][QUOTE]If possible I suggest that you break it up into smaller files (maybe 500,000 each), each with a different set of k. This will allow you to start working on candidates while others are loading.[/QUOTE]

That's the exact thing I'm trying to avoid. I just turned 75 on Monday and can't see too well and don't move around too much. A straightforward one-time load of files, without having to bust them up, would be ideal.[/QUOTE]

You can load that many, but if the server is empty, I recommend that you wait until loading is done before you point clients to it.

MyDogBuster 2020-08-07 03:44

Thanks Mark. Will try to load them all.

MisterBitcoin 2021-01-13 11:59

For now I am using srbsieve up to n=3000; after that I will use a script using srsieve2 and cllr. In general, I think srsieve2 support should be implemented in srbsieve to get a speed boost.

VBCurtis 2021-01-13 16:21

Aren't the syntaxes the same for srsieve and srsieve2? Can't you just rename srsieve2 to srsieve and get what you want?

MisterBitcoin 2021-01-13 17:35

[QUOTE=VBCurtis;569174]Aren't the syntaxes the same for srsieve and srsieve2? Can't you just rename srsieve2 to srsieve and get what you want?[/QUOTE]


Sadly not. srsieve2 needs the -s flag to import sequences from a file, while srsieve just needs the name of the file to get the sequences loaded.

rogue 2021-01-13 18:22

[QUOTE=MisterBitcoin;569185]Sadly not. srsieve2 needs the -s flag to import sequences from a file, while srsieve just needs the name of the file to get the sequences loaded.[/QUOTE]

It isn't quite a "slam dunk". It should support srsieve, srsieve2, and srsieve2cl as possible sieving programs.

rogue 2021-01-13 18:49

1 Attachment(s)
You can try the following, but it will require a change to your srbsieve.ini file. You need to change the phase lines. The first parameter should be the sieving program to use; "srsieve", "srsieve2", and "srsieve2cl" are the possible values, but there isn't any explicit validation. So if you have:

phase=1000,100000,30000

it must now be:

phase=srsieve,1000,100000,30000

This will allow you to choose the program you want to use for each phase and play around with the configurations to determine which works best.

And no, I have not tested these changes.

MisterBitcoin 2021-01-13 19:13

[QUOTE=rogue;569192]You can try the following, but it will require a change to your srbsieve.ini file. You need to change the phase lines. The first parameter should be the sieving program to use; "srsieve", "srsieve2", and "srsieve2cl" are the possible values, but there isn't any explicit validation. So if you have:

phase=1000,100000,30000

it must now be:

phase=srsieve,1000,100000,30000

This will allow you to choose the program you want to use for each phase and play around with the configurations to determine which works best.

And no, I have not tested these changes.[/QUOTE]


[CODE]Phase 3: Processing k from 1132000000 to 1132052528
command: srsieve -q -m1e10 -P42000 -n145 -N192 -w sieve.in
srsieve2 v1.1, a program to find factors of k*b^n+c numbers for fixed b and variable k and n
srsieve: unknown option -- q
Fatal Error: srsieve: invalid option -m1e10[/CODE]

Those flags have also been changed, so I don't think it will work. I tried out what you mentioned, but I got:

[CODE]Error: Could not process line phase=srsieve,144,33000,396000[/CODE]

rogue 2021-01-13 20:07

[QUOTE=MisterBitcoin;569195][CODE]Phase 3: Processing k from 1132000000 to 1132052528
command: srsieve -q -m1e10 -P42000 -n145 -N192 -w sieve.in
srsieve2 v1.1, a program to find factors of k*b^n+c numbers for fixed b and variable k and n
srsieve: unknown option -- q
Fatal Error: srsieve: invalid option -m1e10[/CODE]

Those flags have also been changed, so I don't think it will work. I tried out what you mentioned, but I got:

[CODE]Error: Could not process line phase=srsieve,144,33000,396000[/CODE][/QUOTE]

You renamed srsieve2 to srsieve. Rename back to srsieve2 and update your ini file.

MisterBitcoin 2021-01-13 21:52

[QUOTE=rogue;569204]You renamed srsieve2 to srsieve. Rename back to srsieve2 and update your ini file.[/QUOTE]


Did that, and I still get the error that it couldn't process the line.



[CODE]base=71
mink=1132000000
maxk=1132052528
c=-1
npgfile=1,1_.log
npgfile=2,2_.log
npgfile=3,3_.log
npgfile=4,4_.log
phase=srsieve,144,30000,300000
phase=srsieve,192,32981,42000
phase=srsieve,240,32976,225000[/CODE]


That's the ini I am using, just for testing reasons.
(And yes, I am using the srbsieve you posted.)

rogue 2021-01-13 22:05

[QUOTE=MisterBitcoin;569218]Did that, and I still get the error that it couldn't process the line.



[CODE]base=71
mink=1132000000
maxk=1132052528
c=-1
npgfile=1,1_.log
npgfile=2,2_.log
npgfile=3,3_.log
npgfile=4,4_.log
phase=srsieve,144,30000,300000
phase=srsieve,192,32981,42000
phase=srsieve,240,32976,225000[/CODE]

That's the ini I am using, just for testing reasons.
(And yes, I am using the srbsieve you posted.)[/QUOTE]

The phase should be "srsieve2" if you are using srsieve2 and the exe should also be named srsieve2.

MisterBitcoin 2021-01-13 22:10

[QUOTE=rogue;569219]The phase should be "srsieve2" if you are using srsieve2 and the exe should also be named srsieve2.[/QUOTE]


Still the same.



[CODE]C:\Users\Sydekum\Documents\other stuff\srbsieve_full>srbsieve.exe
Error: Could not process line phase=srsieve2,144,30000,300000[/CODE]

rogue 2021-01-14 13:30

1 Attachment(s)
Try this. I modified the logic for how it parses the phase= lines.

MisterBitcoin 2021-01-15 00:43

[QUOTE=rogue;569269]Try this. I modified the logic for how it parses the phase= lines.[/QUOTE]


I will give this a try after the first batch is done, which should be in 2 days.

MisterBitcoin 2021-01-16 15:40

That works now, but I think it's not as much faster as I had hoped.

[CODE]01/13/21 19:12:33 srsieve started: 1 <= n <= 144, 3 <= p <= 581000
01/13/21 19:14:26 srsieve stopped: at p=581000 because --pmax was reached.[/CODE]srsieve1

[CODE]2021-01-16 16:33:56: Sieve started: 3 < p < 581e3 with 2735928 terms (1 < n < 144, k*7^n+c) (expecting 2509466 factors)
2021-01-16 16:35:58: Sieve completed at p=581047. Primes tested 47612. Found 2368688 factors. 367240 terms remaining[/CODE]

rogue 2021-01-16 18:08

[QUOTE=MisterBitcoin;569445]That works now, but I think it's not as much faster as I had hoped.

[CODE]01/13/21 19:12:33 srsieve started: 1 <= n <= 144, 3 <= p <= 581000
01/13/21 19:14:26 srsieve stopped: at p=581000 because --pmax was reached.[/CODE]srsieve1

[CODE]2021-01-16 16:33:56: Sieve started: 3 < p < 581e3 with 2735928 terms (1 < n < 144, k*7^n+c) (expecting 2509466 factors)
2021-01-16 16:35:58: Sieve completed at p=581047. Primes tested 47612. Found 2368688 factors. 367240 terms remaining[/CODE][/QUOTE]

What is the range of k that you were testing?

You will notice a difference if you can use srsieve2cl when maxp > 1e6 (as the GPU logic comes into play at p > 1e6). If you can sieve more deeply with srsieve2cl then the PRP testing steps should be much faster and the PRP steps are where the most time is spent.

MisterBitcoin 2021-01-16 19:04

[QUOTE=rogue;569470]What is the range of k that you were testing?

You will notice a difference if you can use srsieve2cl when maxp > 1e6 (as the GPU logic comes into play at p > 1e6). If you can sieve more deeply with srsieve2cl then the PRP testing steps should be much faster and the PRP steps are where the most time is spent.[/QUOTE]


With srsieve2 I am now running at k=600M-700M with only 19 k's at once. Both the sieve and PRP steps take around 2 to 3 minutes.


The server this is running on doesn't have a GPU, so that wouldn't come into play. However, I am considering ordering a GPU server to replace my Linux server.

The price would be similar.

But there isn't any GPU-based LLR-test software, by chance? (For this base on top, of course.)

MisterBitcoin 2021-01-19 08:44

It seems like srsieve2 is writing the sieve file sorted by k instead of n. Not that this is a problem, but I just wanted to let you know. :smile:

rebirther 2021-01-19 09:05

[QUOTE=MisterBitcoin;569647]It seems like srsieve2 is writing the sieve file sorted by k instead of n. Not that this is a problem, but I just wanted to let you know. :smile:[/QUOTE]

Yes, you can use srfile to convert it to another file which is sorted. I am doing that.

rogue 2021-01-19 13:38

[QUOTE=MisterBitcoin;569445]That works now, but I think it's not as much faster as I had hoped.

[CODE]01/13/21 19:12:33 srsieve started: 1 <= n <= 144, 3 <= p <= 581000
01/13/21 19:14:26 srsieve stopped: at p=581000 because --pmax was reached.[/CODE]srsieve1

[CODE]2021-01-16 16:33:56: Sieve started: 3 < p < 581e3 with 2735928 terms (1 < n < 144, k*7^n+c) (expecting 2509466 factors)
2021-01-16 16:35:58: Sieve completed at p=581047. Primes tested 47612. Found 2368688 factors. 367240 terms remaining[/CODE][/QUOTE]

srsieve2 will be slower than srsieve for small n (n < 64) because of a primality check it does. If the factor = k*b^n+c, then it will output a message saying that k*b^n+c is prime. I will lower the bounds of this check because it cannot reliably check for primality if k*b^n+c > 2^63.

srsieve2 will be slower than srsieve for small p because srsieve2 validates factors by default. srsieve will only verify if the -c command line switch is used. This has a noticeable impact when p is small due to the number of terms removed by small p.

rogue 2021-01-19 13:39

[QUOTE=rebirther;569648]Yes, you can use srfile to convert it to another file which is sorted. I am doing that.[/QUOTE]

Use -f with srsieve2 to change the format of the output file.
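For example (file name and sieve depth are illustrative; -fP is the switch for pfgw/ABC output, the format mentioned elsewhere in this thread as the one srsieve2 can also read back in):

[CODE]srsieve2 -i sieve_in.pfgw -P 2000000000 -fP[/CODE]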

MisterBitcoin 2021-01-23 01:38

Mark, could you post the source from srbsieve here please? I want to try out something.

rogue 2021-01-23 04:44

[QUOTE=MisterBitcoin;569899]Mark, could you post the source from srbsieve here please? I want to try out something.[/QUOTE]

The entire source is in the 7z file I posted.

MisterBitcoin 2021-01-23 13:06

Thanks, I was blind.

I was thinking about using fbncsieve instead of newpgen, and even implementing fbncsieve into srbsieve.
This *should* improve the speed for very low n's. So here is what I am considering:


1. Remove k's where both k and b are odd
2. Remove trivial factorizations and MOBs
3. Take the remaining k's into fbncsieve for n=1 and sieve until only primes remain; repeat that step up to some n=x (e.g. x=20 for R7 works fine; it takes a while, but the removal rate is worth it!)
Please note that removing the k's that have been primed at n=1 should improve the sieve speed for n=2, and so on. (On R7, e.g., 1G has ~24,000,000 primes at n=1.)


The reason why is this:
[CODE]Status (00:00:44): Removed 418373 terms from newpgen for n = 15: 5749716 remaining
Status (00:00:45): Removed 379781 terms from newpgen for n = 16: 5369935 remaining[/CODE]


It took me 5 minutes to sieve for n=15, but processing those 418k k's would take ~2-3 hours. Increasing the n-value means it requires a higher p-value; but again, fewer k's should bring us faster speed.

I am asking for a lot here, but I am certain those changes could bring us quite a bit forward, processing new bases/ranges much faster.

R7 seems to be as prime-dense as R3, so running to 1G for now might be worth it; and maybe even consider going deeper in the coming years.

Anyway, srbsieve is already a powerful tool regardless, but we can keep improving it over the years. I highly value your tireless efforts for this project, and others.

rogue 2021-01-23 13:47

I see what you are saying. I wrote one before the other existed. I'll look into it, but working the bugs out of the next release of srsieve2 is at the top of my priority list. It's close to working, but it crashes, and I haven't figured out why yet, although I have some ideas.

rogue 2021-02-06 15:11

I looked at the code. It will take a file in ABCD format (abcdfile=) or newpgen format (npgfile=). Right now it supports up to 50 entries (up to n=50). Going past that has less value because of how long it might take to sieve to sqrt(k*b^50), if you consider that most k one would be using this with are at least 30 bits, and any b for this must be >= 7. fbncsieve can output either format.
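(A rough back-of-envelope for why, under the assumptions just stated: with k around 2^30 and b = 7, sqrt(k*b^50) = sqrt(2^30 * 7^50) ≈ sqrt(2^30 * 2^140) = 2^85, on the order of 10^25 — far beyond any practical sieve depth.)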

The only enhancement I could see is modifying srbsieve to execute fbncsieve (like it does srsieve/srsieve2) instead of one needing to run it externally first.

MisterBitcoin 2021-02-06 16:33

[QUOTE=rogue;570988]
The only enhancement I could see is modifying srbsieve to execute fbncsieve (like it does srsieve/srsieve2) instead of one needing to run it externally first.[/QUOTE]


Yep, I think so as well. However, I don't know up to which n-value it is still effective.

The difference between processing up to n=13 and n=16 was around 6 hours for a k-range of 100M.

I can only do more testing when I start the next range, which might be in around 2-3 months. I expect n=20 to be the best value, but we will see.

rogue 2021-02-06 18:27

1 Attachment(s)
I added code to support this suggestion but have done zero testing. You can access it by adding the line "maxNfbncsieve=" to the ini file and specifying the max n before switching to srsieve/srsieve2/srsieve2cl. Right now it doesn't delete the ABCD files created by fbncsieve. I'll do that after we know that it is working.
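So, for instance, to let fbncsieve handle everything up to n=20 (20 is just the value guessed at above, not a tested recommendation), the ini file would gain the line:

[CODE]maxNfbncsieve=20[/CODE]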

MisterBitcoin 2021-02-06 20:00

I will test it once I have free resources and will let you know.

rogue 2021-03-14 23:28

[QUOTE=MisterBitcoin;571025]I will test it once I have free resources and will let you know.[/QUOTE]

Any updates on your testing of this?

rogue 2021-03-17 17:54

1 Attachment(s)
I found one bug when using fbncsieve. That is now fixed. I have been playing with starting a new base with it (S1020). It is nice to have, as it avoids the extra newpgen steps before running srbsieve. As for S1020, this is going to take some time on a single core, because 1020 is divisible by 2, 3, and 5, which means that a lot more candidates survive sieving than for other bases.

If you have a fast GPU, I suggest that you compare the speed of the GPU with the speed of the CPU to determine if srsieve2cl or srsieve2 will benefit you more.

rogue 2021-04-12 17:58

It would be nice if [url]http://www.noprimeleftbehind.net/crus/vstats_new/crus-unproven.htm[/url] either had a link to the remaining k, or a link to a pre-sieved file, or both.

rogue 2021-04-16 21:24

[QUOTE=pokemonlover123;576021]How would I be able to help? I haven't done this before but if I can I'd like to try to assist.[/QUOTE]

You have two options, depending on how much effort you want to put into it. The first option is to install BOINC. You can find details in a thread of this subforum and here: [url]https://srbase.my-firewall.org/sr5/forum_index.php[/url]. There is a subproject for R3 on BOINC. The other option is for those who want to learn more about the software used by CRUS; if that is you, read on.


You can find pre-sieved ranges here: [url]http://www.noprimeleftbehind.net/crus/Riesel-conjecture-reserves.htm[/url] with k from 45G to 60G unreserved at n=50000.

Take a range of 1G k. With NotePad++, split the file so that the k are spread across as many files as you have cores. Since this file is an ABCD file, each of your files will have ABCD as the first line and approximately the same number of ABCD lines. If you are familiar with NotePad++, this shouldn't take too long. If not, it might take a little trial and error on your part to figure this out. You will need to convert each of the new files to ABC format with the number_primes switch on the first line. You can do this with srfile or with srsieve2. With srfile you need to convert to pfgw format. This will create a file with ABC on the first line and one k/n pair on each successive line. Replace that first line with this:

[CODE]ABC $a*3^$b$c // {number_primes,$a,1}[/CODE]

The other option is to use srsieve2 with the -fP option. Just use ^C shortly after you start srsieve2 (once it starts sieving), since you do not need to sieve more deeply. This will automatically put the ABC line you need at the top of the file. The number_primes switch will ensure that you don't continue testing a k after a prime is found. The other advantage of srsieve2 is that it will sort by k then n, not n then k. This will make it far easier for you to estimate how many days are left to test the range. You can do this with the srfile output too, but you need a command line sort or TextPad. Just make sure that the ABC line is the first line in the file.

Once you have your ABC files create a folder/directory for each file with a copy of llr in that folder/directory. You want one folder/directory per core. On Windows use Console2 to open a tab for each folder, then start llr with the ABC file as the input file (I forgot the command line switch for that). You can use the Windows CMD.EXE, but that creates one window per process. Console2 allows you to run multiple CMD.EXE from the same window with each instance as one tab in that window.

If you want to get even fancier than this setup and have some familiarity with databases, you can install MySQL or PostgreSQL and use PRPNet. This is my preferred setup because I have multiple computers and I don't need to monitor them to prevent them from running out of work to do. I typically have two databases set up and two instances of the PRPNet server, so that once one server runs out of work, the other server will have something queued up and the clients won't run out of work. Each client is configured to talk to both servers.

If you have 4 cores, then a range of 1G should take about two months regardless of your setup. With my 30 cores I got this down to about 6.5 days for a 1G range.

One more thing to note: for my setup it took about 5.5 seconds per test (with nearly 30% of the tests skipped due to finding a prime). You might need to sieve more deeply before you start testing. srsieve2 and srsieve2cl are your best options: srsieve2 if you don't have a GPU or have a weak GPU, and srsieve2cl if you have a powerful GPU. There is no guarantee that you can run on a powerful GPU, because it does require a lot of GPU memory, so you might need smaller numbers of k to sieve on a GPU. Fortunately I have one GPU that can sieve a range of 1G k at a time.

If you have any questions, feel free to ask. Best of luck in your hunt.

pokemonlover123 2021-04-17 03:15

[QUOTE=rogue;576026]If you have any questions, feel free to ask. Best of luck in your hunt.[/QUOTE]
I'm having a bit of trouble finding where to get srfile. Could you point me to a download location for it?

rogue 2021-04-17 12:52

[QUOTE=pokemonlover123;576043]I'm having a bit of trouble finding where to get srfile. Could you point me to a download location for it?[/QUOTE]

srfile is bundled with srsieve (not sr1sieve or sr2sieve). Check this [URL="https://www.mersenneforum.org/showpost.php?p=551180&postcount=275"]post[/URL].

pokemonlover123 2021-04-17 20:36

Alright! I believe it's working! I have the prpadmin tool feeding the candidates in the 45G range to my prpserver (got that working it looks like). Once that finishes I'll see if it works when starting the clients. Once I get that working I'll send a message reserving the 45G range.

pokemonlover123 2021-04-17 20:37

[QUOTE=rogue;576052]srfile is bundled with srsieve (not sr1sieve or sr2sieve). Check this [URL="https://www.mersenneforum.org/showpost.php?p=551180&postcount=275"]post[/URL].[/QUOTE]
Thanks. Figured everything out now, I believe.

pokemonlover123 2021-04-17 22:45

I do have one more question, is there a way to load the tests into my PRPNet server faster than using the prpadmin tool?

rogue 2021-04-17 23:42

[QUOTE=pokemonlover123;576074]I do have one more question, is there a way to load the tests into my PRPNet server faster than using the prpadmin tool?[/QUOTE]

Unfortunately no. It might take an hour or two, but the server can start handing out work once you have a few thousand loaded (IIRC). At one point I was thinking about allowing it from the command line when starting the server, but I never coded it.

pokemonlover123 2021-04-18 00:41

I managed to speed up the loading of candidates quite a bit by moving the data directory for the database onto my SSD, so hopefully that'll help in the long run.

pokemonlover123 2021-04-18 02:26

I'm running into a very strange issue... When my prpclients request work, they sometimes crash silently with an access violation exception (which I saw in Event Viewer). I managed to enable user memory dumps for crashing programs and used WinDbg to figure out that they are crashing with some variation of an invalid pointer (invalid read/write through an invalid pointer at different addresses). Is this known, or am I using an old version? How would I go about submitting these dumps in a bug report? I'm currently running a memory diagnostic to rule out hardware issues. I assume this means I will have to rerun all the tests I've already done? Or is the program resilient against hardware/software bugs in regards to its results?

pokemonlover123 2021-04-18 13:16

Doesn't look like it's a hardware issue.

rogue 2021-04-18 13:18

[QUOTE=pokemonlover123;576086]I'm running into a very strange issue... When my prpclients request work, they sometimes crash silently with an access violation exception (which I saw in Event Viewer). I managed to enable user memory dumps for crashing programs and used WinDbg to figure out that they are crashing with some variation of an invalid pointer (invalid read/write through an invalid pointer at different addresses). Is this known, or am I using an old version? How would I go about submitting these dumps in a bug report? I'm currently running a memory diagnostic to rule out hardware issues. I assume this means I will have to rerun all the tests I've already done? Or is the program resilient against hardware/software bugs in regards to its results?[/QUOTE]

I'm impressed that you went the PRPNet route and am happy to see that you have it working.

Temp files are created to retain results so a crash won't cause problems unless one of those temp files is corrupted, but you would know that when you restart the client. Is the client crashing or is the server crashing? Which version of the client are you using? 5.4.5 is on sourceforge. I've been running it on Windows without problems.

pokemonlover123 2021-04-18 13:41

[QUOTE=rogue;576120]I'm impressed that you went the PRPNet route and am happy to see that you have it working.

Temp files are created to retain results so a crash won't cause problems unless one of those temp files is corrupted, but you would know that when you restart the client. Is the client crashing or is the server crashing? Which version of the client are you using? 5.4.5 is on sourceforge. I've been running it on Windows without problems.[/QUOTE]
Lemme try 5.4.5. I have 5.4.0a for some reason. It was the clients that were crashing. The server is working fine. I'll keep you posted. I'm glad I don't have to redo everything.

pokemonlover123 2021-04-18 15:07

Doesn't look like 5.4.5 fixes the issue.

pokemonlover123 2021-04-18 16:21

It's not confirmed, but I think I figured it out? It seems to be the case that the crashes happen if you set the client to request a large number of work units at once in prpclient.ini. I had set it to 600 to reduce the frequency of work requests, and that's when I noticed the issue started. I have reduced it to 100 on that suspicion and it seems to be working fine now? Will keep you updated. It seems requesting a large number of work units might overrun a heap-allocated buffer on the client?

pokemonlover123 2021-04-18 17:24

[QUOTE=pokemonlover123;576136]overrun a heap-allocated buffer on the client?[/QUOTE]
Note, I forgot to mention this: after I updated to 5.4.5, the dumps changed from reporting errors related to invalid pointers to heap corruption errors, which led me to the heap buffer overrun hypothesis.

pokemonlover123 2021-04-18 17:40

I have a couple of questions now that I have resolved the crashing issue. 1) It seems that not all n values have all k's from the sieve file when I converted it to PFGW format. Am I correct in assuming that is on purpose (i.e. is it trivial to preclude certain k's at certain n values from testing)? 2) It seems that (at least according to entries in the results table) the program it is using to run the PRP tests is llr rather than pfgw, even though 3 is not a power of 2 and the option to use llr anyway is not enabled in the server's ini. Will this cause problems, or is it also on purpose?

rogue 2021-04-18 18:11

[QUOTE=pokemonlover123;576139]I have a couple of questions now that I have resolved the crashing issue. 1) It seems that not all n values have all k's from the sieve file when I converted it to PFGW format. Am I correct in assuming that is on purpose (i.e. is it trivial to preclude certain k's at certain n values from testing)? 2) It seems that (at least according to entries in the results table) the program it is using to run the PRP tests is llr rather than pfgw, even though 3 is not a power of 2 and the option to use llr anyway is not enabled in the server's ini. Will this cause problems, or is it also on purpose?[/QUOTE]

If you count the number of rows in the Candidate table, it should match the number of k/n pairs from the ABCD file. The number of rows in the CandidateGroupStats table should match the number of distinct k from the ABCD file. When you ran srfile, did it generate the same number of candidates as it read? Were all of them loaded into the server? If you notice a discrepancy, please track it down, because I haven't seen this particular issue.

The default setting in the server for usellroverpfgw is 1. You can set it to 0 to force all clients to use pfgw. You can use the stats to determine which is truly faster. The main problem is that if pfgw finds a PRP, it will need to run a second test to verify primality, whereas llr will not require a second test, although the one test it runs might take longer.
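In other words, forcing pfgw is a single illustrative line in prpserver.ini:

[CODE]usellroverpfgw=0[/CODE]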

You just want to ensure that you have onekperinstance=1 in prpserver.ini so that multiple clients are not working on the same k concurrently, which can lead you to running more tests than you need to run.

I set my clients to get a maximum of 100 tests at a time. More seems to cause problems. I have thought about solving this by switching the protocol between client and server to something more flexible, such as JSON, and by making the socket logic more robust, but nobody has been pressing for that.

pokemonlover123 2021-04-18 18:41

[QUOTE=rogue;576141]If you count the number of rows in the Candidate table, it should match the number of k/n pairs from the ABCD file. The number of rows in the CandidateGroupStats table should match the number of distinct k from the ABCD file. When you ran srfile, did it generate the same number of candidates as it read? Were all of them loaded into the server? If you notice a discrepancy, please track it down, because I haven't seen this particular issue.[/QUOTE]
srfile did generate the same number as it read. I was referring to the generated file itself, as opposed to what's in the server (I'm still loading candidates into the server, since that process was interrupted last night because of the memory test I ran). In the pfgw file, what I noticed is that not all k's have tests at all n's. The number of unique k's in my server right now is 3294, which matches the number of k's remaining for the 45G range as checked against the list of remaining k's provided at noprimeleftbehind. Assuming there were candidates for all remaining k's at all values of n, that would mean 50k (n's) * 3k (k's) = ~150 million tests. There aren't even that many lines in the sieve file I downloaded, so I don't think there's a problem. I believe the rows in the Candidate table will end up matching the number of k/n pairs, but my question was regarding the fact that not all possible k/n pairs appear in the original file.

pokemonlover123 2021-04-18 18:46

I believe I managed to find the answer to my question. In the ABCD file, for each k, there seems to be a starting n (at the end of the ABCD line for each k, in square brackets), then a whole bunch of numbers. I realized that those numbers match exactly the gaps between tested n values I was wondering about. I suppose I just misunderstood the ABCD format. And I suppose the n candidates that are not tested were sieved out, hence the sieving in the first place. I misunderstood the process and forgot to consider that sieving had happened and that I was generating candidates FROM that sieve file.

rogue 2021-04-18 19:53

[QUOTE=pokemonlover123;576150]I believe I managed to find the answer to my question. In the ABCD file, for each k, there seems to be a starting n (at the end of the ABCD line for each k, in square brackets), then a whole bunch of numbers. I realized that those numbers match exactly the gaps between tested n values I was wondering about. I suppose I just misunderstood the ABCD format. And I suppose the n candidates that are not tested were sieved out, hence the sieving in the first place. I misunderstood the process and forgot to consider that sieving had happened and that I was generating candidates FROM that sieve file.[/QUOTE]

Yes. The file that you started with had already been sieved to 35G so a large percentage of possible candidates had already been removed.

ABCD format is used because it is the most compact format. But PRPNet does not support that format. I have wanted to add support, but have not done so as nobody has asked.

gd_barnes 2021-04-18 21:26

One thing that I will bring up that I don't think was mentioned here about the PRPnet server: make sure that servertype=1 is set for Sierpinski/Riesel in the prpserver.ini file. This will stop searching a k after a prime has been found. Otherwise you will do a lot of extra tests.

pokemonlover123 2021-04-18 22:05

[QUOTE=gd_barnes;576158]One thing that I will bring up that I don't think was mentioned here about the PRPnet server: make sure that servertype=1 is set for Sierpinski/Riesel in the prpserver.ini file. This will stop searching a k after a prime has been found. Otherwise you will do a lot of extra tests.[/QUOTE]
Yep I set that option.

gd_barnes 2021-08-23 20:21

[QUOTE=rebirther;586329]R546 tested to n=2.5k + sieved to 1G (2.5-10k)

75963 remain

Can't sieve higher than 715M

Results emailed - Base released[/QUOTE]

srsieve2 must not be working correctly. The file shows that it is sieved to P=715754497. Yet when I run srsieve on the file it is not removing factors at that sieve depth. This means that the file has already been sieved deeper than that.

I will attempt to sieve the file to P=1G using srsieve and see what point it starts removing factors.

I see that you have been having difficulty sieving to P=1G with srsieve2 for the last several large files with nearly 100,000 k's remaining. It might be time to consider srsieve again.

rebirther 2021-08-23 20:27

[QUOTE=gd_barnes;586342]srsieve2 must not be working correctly. The file shows that it is sieved to P=715754497. Yet when I run srsieve on the file it is not removing factors at that sieve depth. This means that the file has already been sieved deeper than that.

I will attempt to sieve the file to P=1G using srsieve and see what point it starts removing factors.

I see that you have been having difficulty sieving to P=1G with srsieve2 for the last several large files with nearly 100,000 k's remaining. It might be time to consider srsieve again.[/QUOTE]

The first time, it stopped even earlier. I tried 2 times with the input file to resume from the last position with -W16, but I am running the version from January; newer versions of srsieve2 have some issues.

gd_barnes 2021-08-23 20:54

[QUOTE=rebirther;586343]The first time, it stopped even earlier. I tried 2 times with the input file to resume from the last position with -W16, but I am running the version from January; newer versions of srsieve2 have some issues.[/QUOTE]

I just now split the file over 4 cores running good old srsieve. It will finish sieving P=715M-1G in ~8-9 hours. I will let you know at what point it starts removing factors.

I get concerned about running buggy versions of software and using the files from those versions for future testing.

Edit: It has stopped early before. Both R358 and S330 stopped before P=1G earlier this year.

gd_barnes 2021-08-24 01:08

Reb, below are the 3 times that you could not sieve where you needed to with srsieve2 when there was a large number of k's remaining. One of these is your recent post about R546.

[QUOTE=rebirther;585829]R358 tested to n=2.5k + sieved to 1G (2.5-10k)

265552 remain

sieving stopped at 964M because of 1 factor per 7000s+

Results emailed - Base released[/QUOTE]

[QUOTE=rebirther;576273]S330 tested to n=2.5k + sieved to 1G (2.5-10k)

101096 remain

Results emailed - Base released

Sieving ended up at 32M after 2 runs with -t10 and -t16[/QUOTE]


[QUOTE=rebirther;586329]R546 tested to n=2.5k + sieved to 1G (2.5-10k)

75963 remain

Can't sieve higher than 715M

Results emailed - Base released[/QUOTE]

I did multiple short sieves on all of the files to see what their actual approximate sieve depth is. Here is what I found.

1. R358 is sieved to P=~964M like you stated in your post. But the file shows that it was sieved to P=694851893. Why is the file wrong? My sieves confirmed that P=~964M is correct. But the factor removal is faster than 1 per second. I don't know why you are showing one every 7000 secs.

2. S330 appears to be sieved to between P=300M and 350M somewhere. But you and the file show that it was sieved to P=32M. I'm getting ZERO factor removal at P=300M but many factors being removed per second at P=350M.

3. R546 appears to be sieved to P=1G. But you and the file show that it was sieved to P=715M. I'm getting no factor removal at various tests for P>715M including at P=990M. For additional verification I confirmed that factors were being removed at P>1G.

So I'm confused what is happening.

Below are my suggestions of what we should do. Please feel free to offer alternatives.

1. I can finish sieving R358 to P=1G.

2. For the S330 sieve depth, what you stated and what is in the file are very far off from the actual sieve depth. I feel like you should maybe start over the sieving of that one with a version of srsieve2 that you know is working properly.

3. As I stated above I will continue sieving R546 using srsieve for P=715M-1G to see if there are any missing factors. That effort will be complete in ~4 hours from this post. I do not expect to find any factors. If none are found I will simply update the sieve depth in the file.

I hope that future versions of srsieve2 are able to handle large numbers of k's remaining. Perhaps srsieve is better suited for such bases.

gd_barnes 2021-08-24 05:22

Confirmed #3 in my last post: The file provided to me for R546 was already sieved to P=1G.

