2007-05-12, 13:22   #3
VJS
What kind of speed increase are you expecting from this implementation? If it's not more than 400%, I'd say it's a bad idea.

I'm assuming that your implementation is basically an "arrayed" P-1.
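For anyone following along, stage 1 of the classic P-1 method looks roughly like the toy Python below. This is my own sketch of the textbook algorithm, not the implementation being discussed; the modulus, B1 bound, and base are illustrative numbers chosen so the example runs instantly.

```python
from math import gcd, isqrt, log

def small_primes(limit):
    """All primes <= limit via a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, isqrt(limit) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def pminus1_stage1(n, b1, base=2):
    """Stage-1 P-1: finds a prime factor p of n whenever p - 1 is
    b1-smooth (unless the gcd degenerates to n).  Returns the factor
    or None."""
    a = base
    for q in small_primes(b1):
        e = int(log(b1) / log(q))   # largest e with q**e <= b1
        a = pow(a, q ** e, n)
    g = gcd(a - 1, n)
    return g if 1 < g < n else None

# 2003 is prime and 2003 - 1 = 2 * 7 * 11 * 13 is 13-smooth, so
# stage 1 with B1 = 20 pulls it out of 2003 * 2011:
print(pminus1_stage1(2003 * 2011, 20))   # -> 2003
```

The "arrayed" idea, as I read it, would run this same loop over many candidates at once so the prime-power exponentiations are shared.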

The only reason I say this is that, as you know, factor density basically halves for every doubling of p. In addition, we will eventually have to go back and "resieve" those ranges to pick up the missed factors.
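The halving claim is easy to check numerically. The toy Python below counts exactly how many pairs (k, n) with k <= 10000, n < 64 (arbitrary toy bounds, nothing to do with the real sieve ranges) give a candidate k*2^n + 1 divisible by some prime in each doubling window [2^j, 2^(j+1)): the count per window stays nearly flat while the window doubles in width, so the density of factors per unit of p roughly halves each doubling.

```python
from math import isqrt

def primes_between(lo, hi):
    """Primes p with lo <= p < hi, by a simple sieve."""
    sieve = bytearray([1]) * hi
    sieve[0] = sieve[1] = 0
    for i in range(2, isqrt(hi - 1) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [p for p in range(lo, hi) if sieve[p]]

K, NMAX = 10000, 64   # toy grid of candidates k*2^n + 1

def hits_in_window(lo, hi):
    """Count (k, n) pairs, k in [1, K], n in [0, NMAX), whose candidate
    k*2^n + 1 is divisible by a prime p in [lo, hi), counted per prime.
    Divisibility means k == -2^(-n) (mod p), so we count k in residue
    classes instead of trial dividing."""
    total = 0
    for p in primes_between(lo, hi):
        for n in range(NMAX):
            k0 = (-pow(2, -n, p)) % p        # smallest positive solution
            if k0 <= K:
                total += (K - k0) // p + 1   # solutions k0, k0+p, ... <= K
    return total

for j in range(8, 14):
    lo, hi = 2 ** j, 2 ** (j + 1)
    c = hits_in_window(lo, hi)
    print(f"p in [{lo},{hi}): {c} factors, {c / (hi - lo):.2f} per unit p")
```

The per-unit-p density printed in the last column drops by a bit more than half per window (the extra drop is the slow 1/log p decay in prime density), which is the "factor density halves for every doubling" effect.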

Also, the "resieve" will be no faster than it would have been without your first-brush implementation.

The only way I see this being a benefit is for projects like SoB where n is very large. In that case one may be able to test a very small range of n with your method at great speed (assuming speed is directly proportional to the size of the n-range). One could "sieve" 14M<n<15M for all smooth factors up to 2^62 very quickly before those candidates are PRP'ed for the first time.
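At toy scale, that "sieve a small n-range before PRP" idea looks like the sketch below. The fixed k = 5, the range 10 <= n < 300, and the factor bound 1000 are all stand-ins for the real numbers (a SoB k, 14M<n<15M, 2^62); only the exponents that survive would go on to PRP testing.

```python
from math import isqrt

def primes_up_to(limit):
    """All primes <= limit via a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, isqrt(limit) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def survivors(k, n_lo, n_hi, pmax):
    """Exponents n in [n_lo, n_hi) for which k*2^n + 1 has no prime
    factor below pmax."""
    alive = set(range(n_lo, n_hi))
    for p in primes_up_to(pmax):
        if p == 2:
            continue            # k*2^n + 1 is odd for n >= 1
        t = pow(2, n_lo, p)     # 2^n mod p, updated incrementally
        for n in range(n_lo, n_hi):
            # struck out if p | k*2^n + 1, unless the candidate IS p
            if (k * t + 1) % p == 0 and k * (1 << n) + 1 != p:
                alive.discard(n)
            t = (t * 2) % p
    return sorted(alive)

surv = survivors(5, 10, 300, 1000)
print(len(surv), "of 290 exponents survive the toy sieve")
```

A real siever would of course jump along the arithmetic progressions of n for each p rather than scan every exponent, but the yield is the same.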

So again, I'm curious: how much of a speed increase would you expect?