 mersenneforum.org SIQS diagnostics
2019-05-22, 05:25   #45
tabidots

May 2019

24₁₀ Posts

Quote:
 Another debug tactic: for a given A,B and sieve block, go through and compute Q(x) for every sieve location and factor all of them completely prior to sieving (by simple trial division with all primes up to sqrt(Q(x)), for example). Then you will know which locations should be found by the sieve process. If a location is not found that should be during the subsequent sieve, you will have a specific example with which to work more closely. Are the appropriate primes hitting that location during the sieve? if not, why not? etc.
I just tried this with the first two B's of the first A of a polynomial that did not pick up any smooths after sieving, and the results were the same—no smooths. That sounds right, since I am using the same sieve code from MPQS, and it does work for smaller numbers. The problem starts with larger inputs, though I don't have any intuition for why that would be the case.
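For what it's worth, the ground-truth check from the quote could be sketched like this (Python; all names are mine, and it just trial-divides g(x) = (a*x + b)^2 - n over the whole block, which is slow but fine for debugging):

```python
def trial_factor_smooth(value, factor_base):
    """True if |value| factors completely over the factor base."""
    v = abs(value)
    for p in factor_base:
        while v % p == 0:
            v //= p
    return v == 1

def ground_truth_smooths(a, b, n, factor_base, m):
    """Every x in [-m, m] where g(x) = (a*x + b)**2 - n is smooth.

    Brute force by design: whatever the sieve later reports for this
    (a, b) pair can be diffed against this list to find the locations
    it missed."""
    smooth_x = []
    for x in range(-m, m + 1):
        gx = (a * x + b) ** 2 - n
        if gx != 0 and trial_factor_smooth(gx, factor_base):
            smooth_x.append(x)
    return smooth_x
```

Any x present here but absent from the sieve report is a concrete failing case: you can then ask which primes should have hit that location and check their roots and offsets one by one.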

Quote:
 Originally Posted by bsquared For the relations you do find, can you print a couple of them? Maybe something will reveal itself like maybe forgetting to trial divide by small primes that were skipped (I remember doing that in one of my first experimental versions).
Sure. I just ran 100 polynomials (counting each b as a separate polynomial) and got 6 smooths. The first few are as follows (not necessarily in the order they were picked up):

Code:
(note: g(x) = (a*a)x^2 + (2*a*b)x + (b*b-n))

===============
Smooth #1
===============

Polynomial g(x) =
66725635853624133104491845866161 x^2 +
152328794209151404760083205350018 x +
-4493699262681020585053171287604495116568

Smooth found at x = -6057

g( -6057 ) = -2046641900942309291498167743817830894505

LHS (ax+b) = -49467741495615522472

===============
Smooth #2
===============

Polynomial g(x) =
70563600195904987918333753599889 x^2 +
132234389859675357708720776316226 x +
-4493699287668381330025165273238560877368

Smooth found at x = -2988

g( -2988 ) = -3864092420317814097540854938717866384640

LHS (ax+b) = -25091969418551765843

===============
Smooths #3 & 4 (same polynomial)
===============

Polynomial g(x) =
51537154140545974176898100656321 x^2 +
158251595892937553938363990151170 x +
-4493699228136277324690277448438158865064

Smooth found at x = 9618

g( 9618 ) = 275314899814652154417599991477095633600

LHS (ax+b) = 69058049852526267917

Smooth found at x = 11544

g( 11544 ) = 2376171639294564454200188189849665980872

LHS (ax+b) = 82884684887582914631
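One quick thing you can do with printouts like these is verify that each relation is internally consistent with g(x) = (a*x + b)^2 - n. A sketch (Python; the helper name is mine, constants copied from Smooth #1 above):

```python
from math import isqrt

def check_relation(a_sq, two_ab, c, x, g, lhs):
    """Consistency check for one printed relation, where the polynomial
    is g(x) = a_sq*x^2 + two_ab*x + c with c = b*b - n."""
    a = isqrt(a_sq)
    if a * a != a_sq:          # the leading coefficient must be a square
        return False
    b = two_ab // (2 * a)
    return (2 * a * b == two_ab
            and a * x + b == lhs                     # printed LHS is a*x + b
            and a_sq * x * x + two_ab * x + c == g)  # printed g(x) matches

# Values copied from Smooth #1:
ok = check_relation(
    66725635853624133104491845866161,
    152328794209151404760083205350018,
    -4493699262681020585053171287604495116568,
    -6057,
    -2046641900942309291498167743817830894505,
    -49467741495615522472,
)
```

Since lhs**2 - g(x) must equal n for every relation, running the same check on Smooths #3 and #4 (same polynomial) should recover one and the same n from both.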
Is there any useful information that can be gleaned from this?

2019-05-22, 06:23   #46
tabidots

May 2019

2³·3 Posts

Quote:
 Originally Posted by Till Try a bigger sigma. I also use a normal distribution and found that sigma=1 works well, something like 0,75 is a bit faster, but somewhere around 0.55 it gets unstable.
When you say sigma=1, you mean sigma=1*mean, right?

I think I may have implemented my normal distribution function incorrectly, because I just had something very interesting happen—when I changed the sigma to 1 (not 1*mean), there was one polynomial that picked up 132 smooths! (Of course, the program halted because only one set of qs could satisfy such an excessively strict standard deviation, hahaha)

Admittedly, my understanding of statistics is very poor (it was a requirement at my university even for humanities majors like myself, and I nearly failed). I just copied the function from Wikipedia:

f(x | μ, σ²) = (1 / (σ√(2π))) · e^(−(x−μ)² / (2σ²)), where μ is the mean and σ² is the variance.

Now, the original function has σ² as input, but only uses σ in the body of the function. If this isn't a typo, then you can use the function even if you just define the standard deviation, and not the variance, right? (I understand standard deviation but don't really understand intuitively what the variance is, so I'd rather use that if I can.)
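For what it's worth, your reading is right that only σ appears in the body; the σ² in the signature is just the conventional way of naming the parameter. A tiny numerical sketch (plain Python, function name mine):

```python
import math

def normal_pdf(x, mu, sigma):
    """The density as copied from Wikipedia, written directly in terms
    of sigma (the standard deviation); sigma**2 is the variance, but
    the body never needs it separately."""
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    return coeff * math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))
```

Dropping the 1/(σ√(2π)) factor only rescales every weight by the same constant, so if you are just comparing primes against each other it should be harmless; if you saw odd behavior with it, that points at something else.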

The other thing is that I was getting odd behavior when I scaled by the 1/(σ√(2π)) factor. So I took that out.

So when you say "use a sigma of 1", I am interpreting that to mean this: In the case of this 50-digit number, my ideal q is the k-th root of the ideal a (for k primes per a). So σ equals that ideal q. Thus primes in the factor base between the mean − σ and the mean + σ fall within 1 standard deviation of the mean. Is that correct?

2019-05-22, 07:55   #47
tabidots

May 2019

2³·3 Posts

Quote:
 Originally Posted by tabidots when I changed the sigma to 1 (not 1*mean), there was one polynomial that picked up 132 smooths!
On second thought, I think they are all probably duplicates, because on a typical go-round with these parameters, the program will cycle between the same two sets of qs, and the huge spike in smooths is not seen until the second sieving of the same polynomial. That is, they are probably all smooths formed from partials left over by the same x and polynomial. Oops :/

2019-05-22, 14:16   #48
bsquared

"Ben"
Feb 2007

2·1,789 Posts

Quote:
 Originally Posted by tabidots I just tried with this with the first two B's of the first A of a polynomial that did not pick up any smooths after sieving, and the results were the same—no smooths. That sounds right, since I am using the same sieve code from MPQS and it does work for smaller numbers. The problem starts with larger inputs, though I don't have any intuition for why this would be the case. Sure. I just ran 100 polynomials (counting each b as a separate polynomial) and got 6 smooths. The first few are as follows (not necessarily in the order they were picked up): Is there any useful information that can be gleaned from this?
I'm guessing you are not using the large prime variation? At least, the relations shown have no large primes. Even so, there should be more full smooths to find. Scaling from data I've gathered, it looks like there should be a minimum of about 0.33 full-smooths per b-poly with your settings. So I'd run the debug idea on more than just a couple of B's. Some B's you'd expect not to have any full-smooths, but on average you should see more, and two tests won't give enough coverage.

If you do run it on a bunch of B's and verify that the sieve code is finding everything it should, then at least you've eliminated that as a source of concern.

Last fiddled with by bsquared on 2019-05-22 at 14:24

2019-05-22, 14:45   #49
Till

"Tilman Neumann"
Jan 2016
Germany

1D9₁₆ Posts

Quote:
 Originally Posted by tabidots When you say sigma=1, you mean sigma=1*mean, right? I think I may have implemented my normal distribution function incorrectly, because I just had something very interesting happen—when I changed the sigma to 1 (not 1*mean), there was one polynomial that picked up 132 smooths! (Of course, the program halted because only one set of qs could satisfy such an excessively strict standard deviation, hahaha) Admittedly, my understanding of statistics is very poor (it was a requirement at my university even for humanities majors like myself, and I nearly failed). I just copied the function from Wikipedia: f(x | μ, σ²) = (1 / (σ√(2π))) · e^(−(x−μ)² / (2σ²)), where μ is the mean and σ² is the variance. Now, the original function has σ² as input, but only uses σ in the body of the function. If this isn't a typo, then you can use the function even if you just define the standard deviation, and not the variance, right? (I understand standard deviation but don't really understand intuitively what the variance is, so I'd rather use that if I can.) The other thing is that I was getting odd behavior when I scaled by the 1/(σ√(2π)) factor. So I took that out. So when you say "use a sigma of 1", I am interpreting that to mean this: In the case of this 50-digit number, my ideal q is the k-th root of the ideal a. So σ equals that ideal q. Thus primes in the factor base between the mean − σ and the mean + σ fall within 1 standard deviation of the mean. Is that correct?

Sorry, I should have looked at my source code.

I am _not_ using a normal distribution.
What I am doing is:
* Given a_best = the theoretically best a-value and q_count = the wanted number of q's, find the index indexCentre of the prime base element such that primeBase[indexCentre]^q_count is nearest to a_best.
* Compute some "variance" as indexVariance = 0.75*sqrt(primeBaseSize)
* Then for each of the first q_count-1 q's, I select a prime base index randomly from the uniform distribution [indexCentre - indexVariance, indexCentre + indexVariance].
* The last q is computed deterministically to match a_best as well as possible.

What I meant with "sigma" is the 0.75 factor.

The procedure seems to be very similar to henryzz's.
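If it helps, the four steps might be sketched roughly like this (Python; all names are mine, and the deterministic last-q step is simplified to a nearest-product search, which may differ from the actual code):

```python
import math
import random

def choose_qs(prime_base, a_best, q_count, sigma=0.75):
    """Pick q_count distinct factor-base primes whose product approximates
    a_best, following the uniform-window procedure described above."""
    # 1) index whose prime, raised to q_count, is nearest to a_best
    target = a_best ** (1.0 / q_count)
    centre = min(range(len(prime_base)),
                 key=lambda i: abs(prime_base[i] - target))

    # 2) the "variance": sigma * sqrt(prime base size), in index units
    half_width = int(sigma * math.sqrt(len(prime_base)))
    lo = max(0, centre - half_width)
    hi = min(len(prime_base) - 1, centre + half_width)

    # 3) first q_count-1 primes: uniform random indices in the window
    indices = random.sample(range(lo, hi + 1), q_count - 1)
    qs = [prime_base[i] for i in indices]

    # 4) last q: deterministic, bringing the product closest to a_best
    partial = math.prod(qs)
    last = min((p for i, p in enumerate(prime_base) if i not in indices),
               key=lambda p: abs(partial * p - a_best))
    qs.append(last)
    return qs
```

The uniform window is the key difference from a normal-distribution weighting: every index in [indexCentre − indexVariance, indexCentre + indexVariance] is equally likely, and the 0.75 "sigma" only controls the window width.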

Last fiddled with by Till on 2019-05-22 at 15:03 Reason: clarity, last q
