2007-05-12, 15:00   #5
Citrix

Quote:
Originally Posted by VJS
Citrix,

What type of speed increase are you expecting from this implementation? If it's not more than 400%, I would say that it's a bad idea.

I'm assuming that your implementation is basically an "arrayed" P-1.

The only reason I say this is that, as you know, the factor density basically halves for every doubling of p. In addition, we will eventually have to go back and "resieve" those ranges to pick up the missed factors.

Also, the "resieve" will not be any faster than it would have been without your first-brush implementation.

The only way I see this as a benefit is for projects like SoB where n is very large. In that case one may be able to test a very small range of n with your method at great speed (assuming speed is directly proportional to the size of the n-range). One could "sieve" 14M<n<15M for all smooth factors up to 2^62 very quickly, before those candidates are PRP'ed for the first time.

I'm more curious about how much of a speed increase you would expect?
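
On the density point first: a prime p divides a random candidate k*2^n+1 with probability roughly 1/p, so the expected yield per prime does halve each time p doubles. A quick toy check (the k value and the n-range are arbitrary choices for illustration, and sympy is only used for the prime lists and counts):

Code:
from sympy import primepi, primerange

K = 67607                  # arbitrary odd k, for illustration only
N_LO, N_HI = 1000, 3000    # toy n-range

def hits(p_lo, p_hi):
    """Count pairs (p, n) with p | K*2^n + 1 for primes in [p_lo, p_hi)."""
    count = 0
    for p in primerange(p_lo, p_hi):
        r = pow(2, N_LO, p)            # r = 2^n mod p, updated incrementally
        for n in range(N_LO, N_HI):
            if (K * r + 1) % p == 0:
                count += 1
            r = (r * 2) % p
    return count

for lo in (1000, 2000, 4000, 8000):
    nprimes = int(primepi(2 * lo) - primepi(lo))
    print(lo, round(hits(lo, 2 * lo) / nprimes, 3))  # roughly halves per doubling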

The main reason for implementing this algorithm is to save PRP tests. Lars had posted on one of the threads that we had done over 10,000 tests for which a factor was later found. I would like to find these easy factors first and avoid doing some of these tests, thus finding primes faster.

If we find primes faster and have fewer k's in the sieve, the whole project will then speed up.

I don't know how fast the project will become overall (sieve + PRP), but if we save 10,000 tests we are saving almost 100 days of PRP work.
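
(For scale: 100 days over 10,000 tests is 100*24*60/10000 ≈ 14.4 minutes of PRP time saved per test, on average.)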

I expect the sieve client to be at least 20 times faster. BTW, what do you mean by "I'm assuming that your implementation is basically an 'arrayed' P-1"? Could you describe it, so I know whether or not we are talking about the same thing?
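
For reference, here is what stage 1 of a plain Pollard P-1 looks like, which may or may not be what you mean by "arrayed": a prime factor p of N is exposed whenever p-1 is B-smooth, because then p-1 divides E (the product of all prime powers up to B), a^E ≡ 1 (mod p) by Fermat, and gcd(a^E - 1, N) picks up p. A toy sketch (the bound and the little semiprime are illustrative only):

Code:
from math import gcd, log
from sympy import primerange

def pminus1_stage1(N, B, a=3):
    """Return a factor p of N with p-1 B-smooth, or None."""
    x = a % N
    for q in primerange(2, B + 1):
        e = int(log(B, q))        # largest e with q^e <= B
        x = pow(x, q ** e, N)     # accumulates x = a^E mod N
    g = gcd(x - 1, N)
    return g if 1 < g < N else None

# Toy demo: 9721 is prime and 9721 - 1 = 2^3 * 3^5 * 5 is very smooth,
# so stage 1 with B = 250 digs it out of this semiprime:
N = 9721 * 1000003
print(pminus1_stage1(N, 250))     # prints 9721

In the sieve context N would be a k*2^n+1 candidate; the semiprime above is just a stand-in small enough to check by hand.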

Last fiddled with by Citrix on 2007-05-12 at 15:04