Old 2020-09-03, 17:11   #20
Do not split the range of n. That will not speed things up.
I see now that it apparently doesn't work that way... (My intention was to utilize all four physical cores, since the -t option doesn't work under Windows. Won't it speed up overall progress (not progress on that specific n-range) if I run another instance of sr2sieve on one of the idle cores? I'm not really splitting the n-range but rather extending it.)

Like I said a few posts ago, sr2sieve is designed to split over many clients, each with its own range of p to sieve.
I thought about that, and apparently I don't really understand how sieving works. I assumed that BOTH the size of the p-range and the n-range determine the speed. If that were the case, it would make sense to work in small increments of -P to gradually decrease the n-range.

Now I think it depends only on the p-range? In that case my approach was indeed not good, and it also explains how sr2sieve handles factors.txt.
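(For what it's worth, the usual back-of-envelope estimate supports this: sieving work scales with the p-range scanned, while the number of surviving candidates mainly determines how much LLR work remains. A minimal sketch of the standard Mertens-style heuristic, where the fraction of candidates surviving a sieve up to pmax is roughly ln(pmin)/ln(pmax) relative to a sieve up to pmin; the function name and example values are mine, not from sr2sieve:)

```python
from math import log

def surviving_fraction(pmin, pmax):
    """Heuristic: fraction of candidates expected to survive after
    extending a sieve from depth pmin up to depth pmax.
    Based on Mertens' theorem: survivors scale like 1/ln(p)."""
    return log(pmin) / log(pmax)

# Example: deepening a sieve from p = 1e12 to p = 1e13
# removes only about 8% of the remaining candidates.
print(round(surviving_fraction(1e12, 1e13), 3))  # → 0.923
```

This is why each extra order of magnitude of p yields fewer factors for the same effort, and why the sensible stopping point is where the sieve's factor-finding rate drops below the LLR testing rate.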

So ideally each physical core would run one instance of sr2sieve on the same abcd input file but with a different p-range, and candidates would be removed from the abcd file only at the end of the whole process, because the number of candidates doesn't really influence sieving speed?
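(A minimal sketch of that setup, computing equal p-subranges for four cores and printing one command line per instance. The flag letters -i/-p/-P/-u follow the conventions discussed in this thread, but the exact option names and the file name sieve.abcd are assumptions; check sr2sieve's own help output before running anything like this.)

```python
# Split one overall p-range into equal chunks, one per physical core.
# PMIN/PMAX are example values, not a recommendation.
PMIN = 1_000_000_000_000   # 1e12, current sieve depth
PMAX = 5_000_000_000_000   # 5e12, target sieve depth
CORES = 4

step = (PMAX - PMIN) // CORES
commands = []
for i in range(CORES):
    lo = PMIN + i * step
    hi = lo + step
    # -u i gives each instance its own factors/checkpoint files,
    # as described later in this post.
    commands.append(f"sr2sieve -i sieve.abcd -p {lo} -P {hi} -u {i}")

for cmd in commands:
    print(cmd)
```

Each instance then writes its own factors file; the abcd file is only rewritten once, after all subranges are done, by removing every candidate for which any instance found a factor.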

edit: If the n-range really has no noticeable impact on sieving speed, what bounds should I use? The lower bound is obviously determined by the size of prime I want to find, but the upper bound? The largest n I can see myself LLR-testing in the future? :)

And I used -u # to give the instances their own factors, checkpoint, and other files. However, with -c they all still use the same sr2cache.bin; is that intentional, or should I specify different files using -C?

Last fiddled with by bur on 2020-09-03 at 17:39