dave_dm · 2004-06-06, 21:35
Experimental analysis :)

OK, here's some experimental analysis. I computed the group orders of 6014 curves over the 30-digit prime p = 19679309647679934749999111. Of these, 16 were (B1=50000, B2=11757135)-smooth and 88 were (B1=250000, B2=116469998)-smooth.
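Here '(B1, B2)-smooth' means the usual stage 1 / stage 2 success condition on the group order. A rough Python sketch of that test, taking the group order as already computed (factorint is sympy's) and ignoring Brent-Suyama style stage 2 extensions:

Code:
from sympy import factorint

def is_b1_b2_smooth(order, B1, B2):
    # Success if every prime power dividing the order is <= B1 (stage 1),
    # except for at most one extra prime q, to the first power, with
    # B1 < q <= B2 (stage 2).
    hard = [(p, e) for p, e in factorint(order).items() if p**e > B1]
    if not hard:
        return True
    if len(hard) == 1:
        p, e = hard[0]
        return e == 1 and p <= B2
    return False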

Let a = 1 - 16/6014 and b = 1 - 88/6014, the observed probabilities that a single curve fails to find p at each pair of bounds. Treating these as the true probabilities, the probability of success with 500 fresh curves at B1=250000 is

1 - b^500 = 0.9993718853.
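The same calculation in a few lines of Python, in case anyone wants to check the arithmetic:

Code:
N = 6014           # curves whose group order was computed
a = 1 - 16 / N     # P(a curve is not (50000, 11757135)-smooth)
b = 1 - 88 / N     # P(a curve is not (250000, 116469998)-smooth)

# chance that at least one of 500 fresh B1=250000 curves finds p
print(1 - b**500)  # ~0.99937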

Now suppose we take 240 curves which are not (50000, 11757135)-smooth. If we continue these to B1=250000, how many additional 'fresh' curves do we need to run at B1=250000 to achieve the same probability of success? Well, a continued curve can only succeed if its group order is (250000, 116469998)-smooth given that it is not (50000, 11757135)-smooth, so each continued curve fails with probability b/a while each fresh curve fails with probability b. Assuming my brain's in gear, conditional probability tells us that we need n such that

1 - b^(240+n) / a^240 = 1 - b^500,

so the required n is 304.
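Solving that equation for n, with the same empirical a and b and rounding up to a whole curve:

Code:
from math import log, ceil

N = 6014
a = 1 - 16 / N     # P(not (50000, 11757135)-smooth)
b = 1 - 88 / N     # P(not (250000, 116469998)-smooth)

# Each continued curve fails with conditional probability b/a and each fresh
# curve with probability b, so b^(240+n) / a^240 = b^500 gives
# n = 260 + 240*log(a)/log(b).
n = ceil(260 + 240 * log(a) / log(b))
print(n)           # 304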

It remains to see which strategy takes more time. On my box, running 500 curves to B1=250000 takes 2960s. Running 240 curves from B1=50000 to B1=250000 and then 304 fresh curves to B1=250000 takes 3067s.
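Spelling out the relative cost:

Code:
fresh_only = 2960   # seconds: 500 fresh curves to B1=250000
continued  = 3067   # seconds: 240 continued curves + 304 fresh curves

print((continued - fresh_only) / fresh_only)  # ~0.036, i.e. about 3.6% slower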

So we conclude that, for this 30-digit prime, continuing the saved B1=50000 residues takes 3.6% more time to reach the same probability of finding p than running fresh curves to B1=250000. I reckon this is mainly due to the stage 2 work being duplicated, which cannot be avoided.

This argument extends by handwaving to the conclusion that it's probably never worth continuing failed residues.

Dave