Upcoming Work
I understand that NFSNET will do 2,764+ after they do 2,761-.
I have another week of sieving for 2,1382M, and then perhaps two weeks for 2,1402L. I will then start on 2,764+ with my lattice siever while NFSNET proceeds with a line siever.

What parameters shall we use? 4x^6 + 1 is a good choice for the polynomial, but we should decide on factor base sizes and large-prime (LP) bounds. Will a factor base bound of 32 million be sufficient?

At the current rate, I will get started before 2,761- finishes. I will compress my data and send it to RKW on CD. I expect a lot of duplicates; this is an experiment.

Bob
[QUOTE=R.D. Silverman]I understand that NFSNET will do 2,764+ after they do 2,761-.
I have another week of sieving for 2,1382M, and then perhaps two weeks for 2,1402L. I will then start on 2,764+ with my lattice siever while NFSNET proceeds with a line siever. What parameters shall we use? 4x^6 + 1 is a good choice for the polynomial, but we should decide on factor base sizes and LP bounds. Will a factor base bound of 32 million be sufficient? At the current rate, I will get started before 2,761- finishes. I will compress my data and send it to RKW on CD. I expect a lot of duplicates; this is an experiment. Bob[/QUOTE]

Polynomials are x^6 + 4 and 2^{127}x - 1 (i.e. the reciprocal polynomials), FB = 40M on each side, LPB = 1G.

I notice Wacky is reading this thread as I type, so I'll leave him to comment on any other aspects.

Paul
[QUOTE=xilman]Polynomials are x^6+4, 2^{127}x-1 (i.e. reciprocal polynomials), FB=40M on each side, LPB=1G.
I notice Wacky is reading this thread as I type, so I'll leave him to comment on any other aspects. Paul[/QUOTE]

Is it clear that switching from 4x^6 + 1 with root 2^127 to x^6 + 4 with root 2^{-127} is better? I think it will be worse.

Consider the linear norm. It will be 2^127 a - b (versus a - 2^127 b). For much of the sieve region, a is quite a bit bigger than b, resulting in bigger linear norms. Yes, a^6 + 4b^6 will be slightly smaller than 4a^6 + b^6, but not enough smaller to make up for the larger rational-side norms. Discussion please.

With a bound of 40M, I will need to recompile my lattice siever. I will lose some machines as a result, because they have only 1/2 GByte of memory, and I can't run in 1/2G with a factor base this large. May I suggest that I use a bound of only 32M?
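The norm comparison can be sketched numerically. This is an editor's illustration, not code from either project: the sample point is arbitrary, and the homogenized norm formulas follow from the two polynomial pairs described above (2,764+ = 4·(2^127)^6 + 1).

```python
import math

M = 2**127

def norms_original(a, b):
    # Pair A: f(x) = 4x^6 + 1, root m = 2^127
    # algebraic norm |4a^6 + b^6|, rational norm |a - m*b|
    return abs(4 * a**6 + b**6), abs(a - M * b)

def norms_reciprocal(a, b):
    # Pair B: f(x) = x^6 + 4, linear polynomial 2^127*x - 1
    # algebraic norm |a^6 + 4b^6|, rational norm |m*a - b|
    return abs(a**6 + 4 * b**6), abs(M * a - b)

# A sample point with |a| much larger than b, as over much of the region:
a, b = 30_000_000, 1_000_000

algA, ratA = norms_original(a, b)
algB, ratB = norms_reciprocal(a, b)

# Pair B's algebraic norm is somewhat smaller, but its rational norm
# grows by roughly a factor |a|/b, which dominates here:
print("pair A bits:", math.log2(algA), math.log2(ratA))
print("pair B bits:", math.log2(algB), math.log2(ratB))
```

At this particular point the combined norm is larger for the reciprocal pair, consistent with the worry above, though the real answer depends on yields over the whole region, which is what the sieving experiments mentioned later in the thread measure.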
[QUOTE=R.D. Silverman]Is it clear that switching from 4x^6 + 1 with root 2^127 to x^6 + 4 with root
2^{-127} is better? I think it will be worse. Consider the linear norm. It will be 2^127 a - b (versus a - 2^127 b). For much of the sieve region, a is quite a bit bigger than b, resulting in bigger linear norms. Yes, a^6 + 4b^6 will be slightly smaller than 4a^6 + b^6, but not enough smaller to make up for the larger rational-side norms. Discussion please. With a bound of 40M, I will need to recompile my lattice siever. I will lose some machines as a result, because they have only 1/2 GByte of memory, and I can't run in 1/2G with a factor base this large. May I suggest that I use a bound of only 32M?[/QUOTE]

Comments Please?????
[QUOTE=R.D. Silverman]Comments Please?????[/QUOTE]
Something has gone wrong. I posted a lengthy comment, but it seems to have disappeared. Summary:

Running with FB primes to 32M should be fine. You won't find as many relations, but those you do find are as good as any.

The sieving area is close to square, in that we expect to have b < 40M with |a| < 45M.

The reciprocal polynomial gives skewness > 1 (I forget the exact figure; around 1.2), and although I didn't test yields on this occasion, every previous time gave slightly better yields with skew > 1 than with skew < 1. When the skewness is so close to unity it doesn't much matter anyway.

Using half a gig of RAM sounds excessive. The CWI line siever takes about 80M and Franke's lattice siever about 130M. What are you doing that requires 4-6 times as much memory as the opposition?

I hope this post appears ...

Paul
[QUOTE=xilman]Something has gone wrong. I posted a lengthy comment, but it seems to have disappeared.
Summary: running with FB primes to 32M should be fine. You won't find as many relations, but those you do find are as good as any. The sieving area is close to square, in that we expect to have b < 40M with |a| < 45M. The reciprocal polynomial gives skewness > 1 (I forget the exact figure; around 1.2), and although I didn't test yields on this occasion, every previous time gave slightly better yields with skew > 1 than with skew < 1. When the skewness is so close to unity it doesn't much matter anyway. Using half a gig of RAM sounds excessive. The CWI line siever takes about 80M and Franke's lattice siever about 130M. What are you doing that requires 4-6 times as much memory as the opposition? I hope this post appears ... Paul[/QUOTE]

According to info I received from Alex, the Franke siever takes OVER 1/2 G; I recall a number closer to 800M.

A square sieve area is *suboptimal*. Read my paper. Near the origin you should be sieving ~|a| < 150M (or perhaps even more). This should drop to |a| < 75M for b > 100K, then to 50M for b > 1M, then gradually down to ~25-30M as b exceeds 10M.
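The stepped schedule above amounts to a simple width function of b. This sketch is illustrative only; the function name is invented, and the final 30M value is one choice from the stated ~25-30M range.

```python
def a_bound(b):
    """Suggested sieve half-width |a| as a function of b (illustrative)."""
    if b <= 100_000:
        return 150_000_000   # widest lines near the origin
    if b <= 1_000_000:
        return 75_000_000
    if b <= 10_000_000:
        return 50_000_000
    return 30_000_000        # narrowest lines for the largest b
```

The "top hat" approach described in the reply below approximates this decreasing function with only two or three rectangular strata.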
Bob,
Sorry that I have been unable to reply sooner. I finally closed on the purchase of my new home; my servers are currently spread across 3 counties. I'm exhausted from the effort, and I have only just begun to move.

There are some false assumptions that you have made. These are likely the result of your reasonable interpretation of misleading data. We have already started sieving on 2,764+ and have collected over 10M relations to date.

I had hoped to have completed the sieving on 2,761- by now. However, 80M relations looks as if it will produce a matrix that is too large. With some of the participating sievers, I'm now collecting additional relations in hopes of reducing that matrix size. The majority, however, are working on 2,764+.

I'm somewhat concerned about the duplication rate caused by running both lattice and line sievers. My only previous experience produced about 30% duplicates, but that was done with special-q outside the factor base.

My other concern with lattice sieving is the resulting matrix size. Because of the way lattice sieving produces relations, it virtually assures that each special-q will survive to the final matrix.
[QUOTE=R.D. Silverman]A square sieve area is *suboptimal*. Read my paper. Near the origin
you should be sieving ~|a| < 150M (or perhaps even more). This should drop to |a| < 75M for b > 100K, then to 50M for b > 1M, then gradually down to ~25-30M as b exceeds 10M.[/QUOTE]

NFSNET is using a "top hat" sieving area. Because the CWI line siever does rectangular areas, we treat each project as a set of sub-projects where the sieving width is set to progressively smaller sizes as b increases. Driven by administrative and factor-base overheads (we don't currently share factor bases between sub-projects), I have usually limited the number of areas to 2 or 3. I base the transitions on marginal cost: comparing the yield rate of additional width in the lower strata to that of extending the maximum b.
[QUOTE=Wacky]Bob,
Sorry that I have been unable to reply sooner. I finally closed on the purchase of my new home; my servers are currently spread across 3 counties. I'm exhausted from the effort, and I have only just begun to move. There are some false assumptions that you have made. These are likely the result of your reasonable interpretation of misleading data. We have already started sieving on 2,764+ and have collected over 10M relations to date. I had hoped to have completed the sieving on 2,761- by now. However, 80M relations looks as if it will produce a matrix that is too large. With some of the participating sievers, I'm now collecting additional relations in hopes of reducing that matrix size. The majority, however, are working on 2,764+. I'm somewhat concerned about the duplication rate caused by running both lattice and line sievers. My only previous experience produced about 30% duplicates, but that was done with special-q outside the factor base. My other concern with lattice sieving is the resulting matrix size. Because of the way lattice sieving produces relations, it virtually assures that each special-q will survive to the final matrix.[/QUOTE]

The reason I use special-q from inside the factor base is precisely because it does NOT lead to larger matrices, though it does result in more duplicates. Most primes inside the factor base wind up in the final matrix anyway.

The NFSNET web site reports that 2,761- is less than 60% done. Perhaps that web page needs to be made current??? It also failed to announce that e.g. 2,719+ had been done....

I am about 2/3 done with 2,1382M and had planned to do 2,1402L before helping out on 2,764+. Based on the *public* report of progress for 2,761-, I would have finished 2,1402L at about the same time 2,761- finished.

I am, even as I type this, running some experiments to see which of the two proposed polynomials (mine or yours) will have higher yields. However, since you have already started 2,764+, the issue is moot. Given your concerns, I will work on something else. I think I will do 2,781- after 2,1402L.

Bob
[QUOTE=R.D. Silverman]
The NFSNET web site reports that 2,761- is less than 60% done. Perhaps that web page needs to be made current??? It also failed to announce that e.g. 2,719+ had been done....[/quote]Not sure whether Lars reads this. Wacky or I should give him a prod...

[QUOTE=R.D. Silverman]I am, even as I type this, running some experiments to see which of the two proposed polynomials (mine or yours) will have higher yields.[/QUOTE]Please post the results. As stated earlier, I didn't test this particular case, but previous tests have always shown the skew > 1 polynomial to be better when using the CWI line siever, though sometimes only very slightly so. This, presumably, is why Peter put a message into the rootfinder output recommending the reciprocal polynomials when skew < 1.

I'm well aware of the sub-optimality of a rectangular sieving area, having read your paper and seen the work of a number of other researchers. As Richard said, we use a top-hat region because that is (in our experience) a good trade-off between efficiency and practicality. Arjen Lenstra describes the optimal area as a "crown" (think fairy-tale kings & queens) but, as far as I know, he still sieves over a simple rectangle.

Paul
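The reciprocal-polynomial relationship discussed throughout the thread is easy to verify directly: reversing the coefficient list of f gives g with g(x) = x^6 · f(1/x), and the root of g modulo N is the inverse of f's root. An editor's sketch for N = 2^764 + 1:

```python
N = 2**764 + 1
m = 2**127

def poly_eval(coeffs, x):
    """Evaluate a polynomial given low-to-high coefficients."""
    return sum(c * x**i for i, c in enumerate(coeffs))

f = [1, 0, 0, 0, 0, 0, 4]   # 4x^6 + 1, coefficients low-to-high
g = list(reversed(f))       # x^6 + 4, the reciprocal polynomial

assert poly_eval(f, m) == N            # f has root m = 2^127: 4m^6 + 1 = N

m_inv = pow(m, -1, N)                  # modular inverse (Python 3.8+)
assert poly_eval(g, m_inv) % N == 0    # g has root m^-1 (i.e. "2^-127") mod N
```

Both pairs therefore represent the same number; the debate above is purely about which gives smaller norms over the sieve region.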
[QUOTE=R.D. Silverman]According to info I received from Alex, the Franke siever takes OVER 1/2 G.
I recall a number closer to 800M.[/QUOTE]

I need to check my notes, or fire up the Franke siever again. It appears that my recollection of 130M or so may be incorrect.

The question remains, though: why does your lattice siever take six times the memory of the CWI siever when factoring integers with comparably sized factor bases? Is it, perhaps, that you keep an entire line in RAM? We sieve in sections, finding that the small overhead incurred is greatly outweighed by the number of machines that can contribute to the sieving. By and large, I'd rather have twice as many machines running at 90% efficiency than have half of my potential pool of sievers doing nothing useful because they don't have enough memory.

Paul
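Rough arithmetic makes the whole-line-in-RAM hypothesis plausible. Everything here is an editor's back-of-envelope assumption (entry sizes, one byte per sieve location, the section size), not a measurement of either siever.

```python
import math

def approx_prime_count(x):
    # Prime-counting estimate pi(x) ~ x / ln(x); good enough for sizing.
    return int(x / math.log(x))

fb_bound = 40_000_000
entries_per_side = approx_prime_count(fb_bound)   # roughly 2.3M primes
fb_bytes = 2 * entries_per_side * 12              # assume prime+root+log per entry, both sides

line_width = 2 * 45_000_000                       # |a| <= 45M, as quoted above
whole_line_bytes = 2 * line_width                 # one byte per location per side
section_bytes = 2 * 1_000_000                     # sieving 1M positions at a time instead

print(f"factor bases: ~{fb_bytes / 1e6:.0f} MB")
print(f"whole line:   ~{whole_line_bytes / 1e6:.0f} MB")
print(f"per section:  ~{section_bytes / 1e6:.0f} MB")
```

Under these assumptions a whole line costs on the order of 180 MB of sieve array on top of ~55 MB of factor bases, while sieving in sections keeps the array negligible, which is the trade-off Paul describes.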