2007-07-10, 07:08   #49
Citrix

Quote:
Originally Posted by geoff
That ensures last_composite can't be greater than low_end_of_range.

I think the original sieve works fine when x is a power of 2, say 2^y: because last_composite is odd, the inverse of x mod last_composite is found after at most y additions, so overflow doesn't have to be checked as long as you don't expect to sieve beyond 2^(64-y).

However, I think the prime sieve is not the only problem you face. Sr2sieve doesn't do SPH (Silver-Pohlig-Hellman) and doesn't use the Chinese Remainder Theorem; it just knocks out some congruence classes and sends what remains to BSGS (baby-step giant-step). The fastest part of sr2sieve is the BSGS; most of the rest of the code is not particularly flash. But by sieving only with primes p for which p-1 is smooth, you end up minimising the time spent in BSGS and maximising the time spent in the slow code.

So this is the problem: if you restrict sieving to primes with very smooth p-1 then you get bogged down in the slow code, but if you allow less smooth p-1 then the gains over normal sieving are not going to be great enough to justify the effort.

The solution might seem to be to speed up the slow code, but that means a lot of work across all sorts of different areas for only a small benefit to the normal sieve.
I agree with you. There is no point in speeding up the slow code for now. The main goal of this exercise is to look at primes with very smooth p-1 and thus reduce the time per factor. This will help save some PRP tests (first pass + double check). It might become an alternative to the P-1/P+1/ECM methods. Also, as I tested in the sr5sieve forum, using smooth p-1 does actually reduce the time per factor, even without improving the slow code.
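
To make "very smooth p-1" concrete: p-1 is B-smooth when every prime factor of p-1 is at most B. A minimal trial-division check of that property might look like the sketch below; the bound B and the name is_smooth are my own illustration, not anything taken from the sr2sieve/sr5sieve sources.

Code:
#include <cstdint>

// Return true if every prime factor of n is <= B (i.e. n is B-smooth).
// Plain trial division, illustration only -- call it with n = p-1 to test
// whether a sieving prime p would qualify.  The bound B and the function
// name are mine, not from the sr2sieve/sr5sieve code.
bool is_smooth(uint64_t n, uint64_t B)
{
    for (uint64_t d = 2; d <= B && d * d <= n; d += (d == 2 ? 1 : 2))
        while (n % d == 0)
            n /= d;
    // nothing below the loop bound divides what is left, so the original
    // n is B-smooth exactly when the leftover is <= B (it is then 1 or prime)
    return n <= B;
}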
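
On geoff's power-of-2 point above: since last_composite is odd, dividing by x = 2^y modulo it needs at most y conditional additions plus one shift, and the running value stays below roughly m*2^y, which is where the 2^(64-y) limit comes from. A minimal sketch of that idea (my own names, not the actual srsieve routine, and it assumes c < m on entry):

Code:
#include <cstdint>
#include <cassert>

// Solve 2^y * t == c (mod m) for t, with m odd and c < m, using at most y
// conditional additions and one shift.  Sketch of the idea only -- names and
// details are mine, not lifted from srsieve/sr2sieve.  Since m is odd,
// m << i has bit i set, so adding it clears bit i of the running sum; after
// y steps the sum is a multiple of 2^y.  The sum stays below m * 2^y, hence
// overflow is only a worry once m approaches 2^(64-y).
uint64_t div_by_2exp_mod(uint64_t c, unsigned y, uint64_t m)
{
    assert(m & 1);                    // m (e.g. last_composite) must be odd
    uint64_t s = c;                   // s stays congruent to c modulo m
    for (unsigned i = 0; i < y; i++)
        if ((s >> i) & 1)             // bit i of the sum still set?
            s += m << i;              // clear it (bits below i stay zero)
    return s >> y;                    // s = c + k*m is divisible by 2^y
}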
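
And for the BSGS step geoff mentions, here is the generic textbook baby-step giant-step, just to show what that part does: it finds n with b^n = a (mod p) in about sqrt(limit) multiplications by hashing the baby steps and then walking the giant steps. This is only an illustrative sketch assuming p is prime (so the inverse can be taken with Fermat's little theorem), not sr2sieve's tuned implementation.

Code:
#include <cstdint>
#include <cmath>
#include <unordered_map>

// 64-bit modular arithmetic via 128-bit intermediates (GCC/Clang extension).
static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t p) {
    return (uint64_t)((__uint128_t)a * b % p);
}
static uint64_t powmod(uint64_t b, uint64_t e, uint64_t p) {
    uint64_t r = 1 % p;
    for (; e; e >>= 1, b = mulmod(b, b, p))
        if (e & 1) r = mulmod(r, b, p);
    return r;
}

// Baby-step giant-step: return some n in [0, limit) with b^n == a (mod p),
// or -1 if none exists.  Textbook version for illustration only, not
// sr2sieve's implementation.  Requires p prime (for the Fermat inverse).
int64_t bsgs(uint64_t b, uint64_t a, uint64_t p, uint64_t limit)
{
    uint64_t m = (uint64_t)std::ceil(std::sqrt((double)limit));
    std::unordered_map<uint64_t, uint64_t> baby;      // b^j mod p  ->  j
    uint64_t bj = 1 % p;
    for (uint64_t j = 0; j < m; j++) {                // baby steps
        baby.emplace(bj, j);
        bj = mulmod(bj, b, p);
    }
    uint64_t giant = powmod(powmod(b, m, p), p - 2, p);  // b^(-m) mod p
    uint64_t gamma = a % p;
    for (uint64_t i = 0; i <= m; i++) {               // giant steps
        auto it = baby.find(gamma);                   // a * b^(-i*m) == b^j ?
        if (it != baby.end()) {
            uint64_t n = i * m + it->second;          // then b^n == a
            if (n < limit) return (int64_t)n;
        }
        gamma = mulmod(gamma, giant, p);
    }
    return -1;
}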