#1
Nov 2003
2²×5×373 Posts
I understand that NFSNET will do 2,764+ after they do 2,761-.
I have another week of sieving for 2,1382M and then perhaps 2 weeks for 2,1402L. I will then start on 2,764+ with my lattice sieve while NFSNET proceeds with a line siever.

What parameters shall we use? 4x^6 + 1 is a good choice for the polynomial, but we should decide on factor base sizes and LP bounds. Will a factor base bound of 32 million be sufficient?

At the current rate, I will get started before 2,761- finishes. I will compress my data and send it to RKW via CD. I expect a lot of duplicates; this is an experiment.

Bob
#2
Bamboozled!
May 2003
Down not across
10,753 Posts
Quote:
I notice Wacky is reading this thread as I type, so I'll leave him to comment on any other aspects.

Paul
#3
Nov 2003
7460₁₀ Posts
Quote:
2^-127 is better? I think it will be worse. Consider the linear norm: it will be 2^127 a - b (versus a - 2^127 b). For much of the sieve region, a is quite a bit bigger than b, resulting in bigger linear norms. Yes, a^6 - 4b^6 will be slightly smaller than 4a^6 - b^6, but not enough smaller to make up for the larger rational-side norms. Discussion, please.

With a bound of 40M, I will need to recompile my lattice siever. I will lose some machines as a result, because they only have 1/2 Gbyte of memory; I can't run in 1/2G with a factor base this large. May I suggest that I use a bound of only 32M?
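The size argument above is easy to check numerically. A rough sketch (not siever code; the sample point is an illustrative assumption chosen with |a| > b, as over much of the sieve region) comparing the two orientations of the polynomial for 2,764+ = 2^764 + 1 = 4·(2^127)^6 + 1:

```python
m = 2 ** 127

def norms_original(a, b):
    """f(x) = 4x^6 + 1 with rational side a - m*b (the proposed choice)."""
    return abs(4 * a**6 + b**6), abs(a - m * b)

def norms_reciprocal(a, b):
    """Reciprocal f(x) = x^6 + 4 with rational side m*a - b."""
    return abs(a**6 + 4 * b**6), abs(m * a - b)

# Illustrative point with |a| > b:
a, b = 40_000_000, 1_000_000
alg1, rat1 = norms_original(a, b)
alg2, rat2 = norms_reciprocal(a, b)

# The reciprocal form has a slightly smaller algebraic norm ...
assert alg2 < alg1
# ... but a much larger rational norm, which is the objection above.
assert rat2 > rat1
print(rat2.bit_length() - rat1.bit_length(), "extra bits on the rational side")
```

The extra rational-side bits scale with how far |a| exceeds b at each point, which is why the imbalance matters over most of the region.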
#4
Nov 2003
2²×5×373 Posts
Quote:
Comments, please?
#5
Bamboozled!
May 2003
Down not across
10,753 Posts
Quote:
Summary: running with FB primes to 32M should be fine. You won't find as many relations, but those you find are as good as any.

The sieving area is close to square, in that we expect to have b < 40M with |a| < 45M. The reciprocal polynomial gives skewness > 1 (I forget the exact figure, around 1.2), and although I didn't test yields on this occasion, every previous time gave slightly better yields with skew > 1 than skew < 1. When the skewness is so close to unity it doesn't much matter anyway.

Using half a gig of RAM sounds excessive. The CWI line siever takes about 80M and Franke's lattice siever about 130M. What are you doing that requires 4-6 times as much memory as the opposition?

I hope this post appears ...

Paul

Last fiddled with by xilman on 2005-11-16 at 08:25
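The "around 1.2" figure is consistent with the common rule of thumb that the skew balancing the leading and constant coefficients of a degree-d polynomial is (|c0/cd|)^(1/d). A quick check (this balancing rule is a standard heuristic, not necessarily how the exact figure was computed):

```python
def balanced_skew(c_lead, c_const, degree):
    """Skew s making |c_lead| * s^(d/2) equal |c_const| * s^(-d/2),
    i.e. s = (|c_const / c_lead|)**(1/d) -- a common heuristic."""
    return (abs(c_const) / abs(c_lead)) ** (1.0 / degree)

# Reciprocal polynomial x^6 + 4: skew > 1, roughly 1.26
print(balanced_skew(1, 4, 6))
# Original polynomial 4x^6 + 1: the reciprocal of that, roughly 0.79
print(balanced_skew(4, 1, 6))
```

Since 4^(1/6) ≈ 1.26 and its reciprocal is ≈ 0.79, both are close enough to unity that, as noted above, the choice barely matters for yield.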
#6
Nov 2003
1110100100100₂ Posts
Quote:
I recall a number closer to 800M.

A square sieve area is *sub-optimal*. Read my paper. Near the origin you should be sieving ~|a| < 150M (or perhaps even more). This should drop to |a| < 75M for b > 100K, then to 50M for b > 1000K, then gradually to ~25-30M as b > 10M.
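The suggested region can be sketched as a step function giving the |a| bound as a function of b. The breakpoints and widths are just the figures quoted above (with the "gradual" final drop simplified to a single step, and the upper 30M figure taken); a real implementation would tune them:

```python
def a_bound(b):
    """|a| limit for a given b, per the strata quoted above."""
    if b <= 100_000:
        return 150_000_000   # near the origin: ~|a| < 150M
    if b <= 1_000_000:
        return 75_000_000    # |a| < 75M for b > 100K
    if b <= 10_000_000:
        return 50_000_000    # 50M for b > 1000K
    return 30_000_000        # ~25-30M for b > 10M (upper figure)

# Total area is what drives sieving cost; compare with a 45M x 40M square
# (sampled every 1000th b as a cheap approximation of the sum):
area = sum(2 * a_bound(b) for b in range(1, 40_000_000, 1000)) * 1000
print(f"stepped region ~{area:.3g} vs square ~{2 * 45e6 * 40e6:.3g}")
```

The stepped region concentrates effort near the origin, where norms are smallest and yield per unit area is highest.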
#7
Jun 2003
The Texas Hill Country
2101₈ Posts
Bob,

Sorry that I have been unable to reply sooner. I finally closed on the purchase of my new home. My servers are currently spread across 3 counties. I'm exhausted from the effort, and I have just begun to move.

There are some false assumptions that you have made. These are likely the result of your reasonable interpretation of misleading data. We have already started sieving on 2,764+ and have collected over 10M relations to date.

I had hoped to have completed the sieving on 2,761- by now. However, 80M relations looks as if it will produce a matrix that is too large. With part of the participating sievers, I'm now collecting additional relations in hopes of reducing that matrix size. However, the majority are working on 2,764+.

I'm somewhat concerned about the "duplication" rate caused by running both lattice and line sievers. My only previous experience produced about 30% duplicates, but that was done with special-q outside the factor base.

My other concern with lattice sieving is the resulting matrix size. Because of the way lattice sieving produces relations, it virtually assures that each special-q will survive to the final matrix.
#8
Jun 2003
The Texas Hill Country
3²×11² Posts
Quote:
Driven by administrative and factor-base overheads (we don't currently share factor bases between sub-projects), I have usually limited the number of areas to 2 or 3. I base the transitions on the marginal cost, comparing the yield rate of additional width in the lower strata to that of extending the maximum "b".
#9
Nov 2003
2²×5×373 Posts
Quote:
The reason I use special-q from inside the factor base is precisely because it does NOT lead to larger matrices, though it does result in more duplicates. Most primes inside the factor base wind up in the final matrix anyway.

The NFSNET web site reports that 2,761- is less than 60% done. Perhaps that web page needs to be made current? It also failed to announce that e.g. 2,719+ had been done.

I am about 2/3 done with 2,1382M and planned to do 2,1402L before helping out on 2,764+. Based on the *public* report of progress for 2,761-, I would have finished 2,1402L about the same time 2,761- finished.

I am, even as I type this, running some experiments to see which of the two proposed polynomials (mine or yours) will have higher yields. However, since you have already started 2,764+, the issue is moot. Given your concerns, I will work on something else. I think I will do 2,781- after 2,1402L.

Bob
#10
Bamboozled!
May 2003
Down not across
10101000000001₂ Posts
Quote:
Quote:
I'm well aware of the sub-optimality of a rectangular sieving area, having read your paper and seen the work of a number of other researchers. As Richard said, we use a top-hat region because that is (in our experience) a good trade-off between efficiency and practicality.

Arjen Lenstra describes the optimal area as a "crown" (think fairy-tale kings & queens) but, as far as I know, he still sieves over a simple rectangle.

Paul
#11
Bamboozled!
May 2003
Down not across
10101000000001₂ Posts
Quote:
The question remains, though: why does your lattice siever take six times the memory of the CWI siever when factoring integers with comparably sized factor bases? Is it, perhaps, that you keep an entire line in RAM?

We sieve in sections, finding that the small overhead incurred is greatly outweighed by the number of machines that can contribute to the sieving. By and large, I'd rather have twice as many machines running at 90% efficiency than have half of my potential pool of sievers not doing anything useful because they don't have enough memory.

Paul
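Sieving "in sections" is essentially a segmented sieve over each line: instead of one array spanning the whole a-range, process fixed-size windows so memory is bounded by the segment length rather than the line width. A toy rational-side sketch (the function name, threshold factor, and tiny parameters are illustrative assumptions, not any real siever's interface):

```python
import math

def sieve_line_segmented(b, m, a_lo, a_hi, fb_primes, seg_len=65536):
    """Report a with a - m*b (nearly) smooth over fb_primes, using
    O(seg_len) memory regardless of the width of the line."""
    hits = []
    mb = m * b
    for seg_start in range(a_lo, a_hi, seg_len):
        seg_end = min(seg_start + seg_len, a_hi)
        logs = [0.0] * (seg_end - seg_start)   # memory ~ seg_len only
        for p in fb_primes:
            lp = math.log(p)
            # p | (a - m*b)  <=>  a == m*b (mod p)
            a = seg_start + ((mb - seg_start) % p)
            while a < seg_end:
                logs[a - seg_start] += lp
                a += p
        for i, s in enumerate(logs):
            a = seg_start + i
            norm = abs(a - mb)
            if norm == 0:
                continue
            # crude smoothness test; prime powers are under-counted,
            # as in real sievers, hence the slack factor
            if s >= 0.9 * math.log(max(norm, 2)):
                hits.append(a)
    return hits

# toy example: m = 3, b = 2, so the norm is a - 6
print(sieve_line_segmented(2, 3, 0, 30, [2, 3, 5], seg_len=8))
```

The hit list is independent of the segment length, which is why the only cost of sectioning is a little per-segment setup overhead per factor-base prime.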