A question on lattice sieving
Hello everyone.
I've got an NFS question. I'm looking at the options for the lattice siever, and noticed the option to sieve over the rational Q's or the algebraic Q's. (This is how I understand it so far; feel free to correct me if I'm off.) In both of the Perl scripts I've seen, as well as in several of the sieve-reservation threads here, I mostly see lattice sieving done on the algebraic side. I was wondering what the reason for this is. Is it that the rational side is more likely to be smooth? Or is there some other explanation, related to factor-base size or some other parameter choice?
I suppose I usually sieve over algebraic special-Q out of habit. If asked to justify myself: the algebraic side for GNFS jobs is generally much larger than the rational side, and I believe it makes sense to use the special-Q to render effectively smaller the numbers which started off largest. But that's not an answer for why I use it pretty much universally in SNFS cases.
I haven't done the experiments to see how much duplication there is, in a case with roughly equal-sized rational and algebraic sides, if you sieve on both sides; it might be sensible as a way to push yields up on SNFS problems with particularly intractable polynomials.
[QUOTE=fivemack;129439]I suppose I usually sieve over algebraic special-Q out of habit. If asked to justify myself: the algebraic side for GNFS jobs is generally much larger than the rational side, and I believe it makes sense to use the special-Q to render effectively smaller the numbers which started off largest. But that's not an answer for why I use it pretty much universally in SNFS cases.
I haven't done the experiments to see how much duplication there is, in a case with roughly equal-sized rational and algebraic sides, if you sieve on both sides; it might be sensible as a way to push yields up on SNFS problems with particularly intractable polynomials.[/QUOTE]I generally sieve with the special-q on the side which typically has the larger norms. Since the norm on the special-q side is guaranteed to be divisible by q, the remaining portion that has to be smooth is correspondingly smaller.

Paul
It seems to be a wash, in my narrow experience
I sieved on the rational side for two SNFS jobs around 180 digits, with nice quintics and reasonable linear polynomials. I noticed no difference in performance.

Sieve the special-q on the side that has the larger norms. If your SNFS polynomial is well-suited to the number you're factoring (degree 5 for difficulty ~170, degree 6 for difficulty ~240), the norms on both sides will be very close in size and you can sieve on either side (or even both). With lopsided polynomials, such as degree 6 for relatively small numbers or degree 4 for relatively large ones, choosing the special-q on the "right" side has a considerable impact on yield. For GNFS, the algebraic side is usually the larger one.
Alex 
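A very crude back-of-envelope model of why the degree shifts which side has the larger norms. This is my own sketch, not from the post above: the x^d - c polynomial shape, the ~2^30 sieve-coordinate size, and the neglect of skew and coefficient sizes are all simplifying assumptions.

```python
# Crude model (assumptions: SNFS polynomial x^d - c with small c,
# m ~ N^(1/(d+1)), sieve coordinates up to ~2^30, skew ignored):
#   rational norm  ~ 2^30 * m       -> ~30 + n_bits/(d+1) bits
#   algebraic norm ~ (2^30)^d * c   -> ~d*30 bits
import math

def norm_bits(difficulty_digits, d, log2_A=30):
    """Very rough (rational, algebraic) norm sizes in bits."""
    n_bits = difficulty_digits * math.log2(10)
    rational = log2_A + n_bits / (d + 1)
    algebraic = d * log2_A
    return rational, algebraic

for d in (4, 5, 6):
    r, a = norm_bits(170, d)
    print(f"degree {d}: rational ~{r:.0f} bits, algebraic ~{a:.0f} bits")
```

Even this toy model shows the lopsided cases: at difficulty 170, degree 4 leaves the rational side larger, degree 6 leaves the algebraic side much larger, so the best special-q side flips with the degree choice.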
I am doing a 120-digit GNFS with factMsieve.pl, and I have sieved to such a high Q (5,000,000) that sieving is beginning to slow down, and I still don't have enough relations. I think this is because I set it to use gnfs-lasieve4I12e instead of changing to gnfs-lasieve4I13e as it suggested.
To rectify my mistake, can I just stop the script and change it to sieve on the rational side, which has an equal-sized factor base? If I do that, I think I will have to reduce the Q value back to where it started from. Do I just need to change the job file to make the script switch to a lower Q, do I have to change gnfs.log, or is there anything else I need to change? Also, does anyone know how many duplicates will be found by sieving on both sides? The parameters for the factorization are: [code]
n: 164662226981356372690290697081101223039515396404303411217671854407296529397066908531205522349477673086175341704695478399
m:
c5: 600
c4: 2482582436
c3: 524462069639414
c2: 491993485807382001721
c1: 37023560929310523276836154
c0: 13433130680701991004186914629800
Y1: 2257845554887
Y0: 193951316015185993925111
skew: 478166.41
rlim: 4500000
alim: 4500000
lpbr: 27
lpba: 27
mfbr: 50
mfba: 50
rlambda: 2.4
alambda: 2.4
[/code] Should I now change to gnfs-lasieve4I13e, or will that not be necessary to find enough relations? So far I have found about 4.5 million relations, and I think I need about 3 million relations getting past msieve's singleton remover; currently 272k relations get through it. Any help would be much appreciated. I have done some tests, and it appears that you find more relations in the same time if mfbr and mfba are 54 rather than 50; do I just change that in the job file? I have a habit of being over-wordy and not making myself understood, so please ask questions if something I have said confuses you. Before people start asking what the number is: it is RHP585_70.
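As a rough sanity check on the relation target (my own rule-of-thumb estimate, not something factMsieve.pl reports): filtering typically wants on the order of pi(2^lpbr) + pi(2^lpba) unique relations, where pi is the prime-counting function. With the pi(x) ~ x/ln x approximation:

```python
# Rule-of-thumb relation target for the job file above (lpbr = lpba = 27).
# pi(x) ~ x/ln(x) is an approximation and undercounts a little, so the
# real figure is somewhat higher.
import math

def primes_below_est(bits):
    """Approximate count of primes below 2^bits via x/ln(x)."""
    x = 2.0 ** bits
    return x / math.log(x)

lpbr = lpba = 27  # from the job file above
est = primes_below_est(lpbr) + primes_below_est(lpba)
print(f"{est / 1e6:.1f} million")  # prints "14.3 million"
```

That suggests the raw-relation requirement for 27-bit large primes on both sides is well above a few million, which would be consistent with 4.5M raw relations still being far from enough.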