Quote:
Originally Posted by VJS
Cirtix,
What type of speed increase are you expecting from this implementation? If it's not more than 400% I would say that it's a bad idea.
I'm assuming that your implementation is basically an "arrayed" p1.
The only reason I say this is that, as you know, the factor density roughly halves for every doubling of p. In addition, we will eventually have to go back and "resieve" those ranges to pick up the missed factors.
Also, the "resieve" will not be any faster than it would have been without your first-brush implementation.
The only way I see this as a benefit is for projects like SoB where n is very large. In that case one may be able to test a very small range of n using your method with great speed (assuming speed is directly proportional to the size of the n-range). One could "sieve" 14M<n<15M for all smooth factors up to 2^62 very quickly, before those candidates are PRP'd for the first time.
I'm more curious about how much of a speed increase you would expect?

The main reason for implementing this algorithm is to save PRP tests. Lars had posted in one of the threads that we had done over 10,000 tests for which a factor was later found. I would like to find these easy factors first and avoid doing some of those tests, thus finding primes faster.
If we find primes faster and have fewer k's in the sieve, the whole project will then speed up.
I don't know how much faster the project will become overall (sieve + PRP), but if we save 10,000 tests, we are saving almost 100 days of PRP work.
I expect the sieve client to be at least 20 times faster. BTW, what do you mean by "I'm assuming that your implementation is basically an 'arrayed' p1"? Could you describe this, so I know whether we are talking about the same thing.
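
As an aside, VJS's point that factor density roughly halves for every doubling of p is easy to check empirically. Here is a minimal Python sketch of that check; the choice of k = 5, the n-range, and the prime bands are mine, purely for illustration, and are not from either client:

```python
# Empirical check (illustrative sketch, not from either sieve client) of the
# claim that the density of factors found roughly halves each time p doubles.
# k = 5 and the n/p ranges below are arbitrary choices for illustration.

def primes_in(lo, hi):
    """Simple sieve of Eratosthenes; returns the primes in [lo, hi)."""
    sieve = bytearray([1]) * hi
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(hi ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(sieve[i * i::i]))
    return [p for p in range(lo, hi) if sieve[p]]

def hits_per_prime(k, n_max, lo, hi):
    """Average count of n in [1, n_max] with p | k*2^n + 1,
    averaged over the primes p in [lo, hi)."""
    ps = primes_in(lo, hi)
    total = 0
    for p in ps:
        r = 2 % p                       # r tracks 2^n mod p as n increments
        for _ in range(n_max):
            if (k * r + 1) % p == 0:    # p divides k*2^n + 1
                total += 1
            r = (r * 2) % p
    return total / len(ps)

k, n_max = 5, 10000
# Four dyadic prime bands: [2^8, 2^9), [2^9, 2^10), [2^10, 2^11), [2^11, 2^12)
densities = [hits_per_prime(k, n_max, 1 << a, 1 << (a + 1)) for a in range(8, 12)]
print([round(d, 1) for d in densities])
```

Each successive band average should come out at roughly half the previous one, consistent with the ~1/p density of factors per prime, which is the reason deeper sieving yields ever fewer factors per unit of work.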