Current Work & A Query
It appears that Greg and Bruce have some very large resources
available to them (very nice!) and are tackling some very hard numbers. This is terrific. Although I'd like to see them do some of the first few holes, I understand their reasons for not tackling the current "wanted" lists, and it seems quite reasonable.

I'd like to see an effort to finish base 2 to 800 bits. I am currently sieving 2,1101+ (1/3 sieved) and will then do 2,1104+. After that, I will sieve 2,1538M [but may need help with the LA; my biggest machine has only 2G]. The remaining base-2 numbers less than 800 bits are 2,799+ (SNFS), 2,1538M (I will do; SNFS), 2,1586L (GNFS!), 2,1598L (??), and 2,1598M (GNFS). I exclude for the moment the base-2 numbers with composite exponents above 800 bits that have a usable algebraic factor; I will do some of these eventually.

The Query: An interesting question. Is 2,1598L C171 (799 bits by SNFS) easier by SNFS or GNFS? It is close. I suspect the former.

Also an observation: Base 11 has been relatively neglected.... |
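[Editorial note] One very rough way to put a number on the SNFS-vs-GNFS query is the textbook L[1/3] complexity formulas, with constants (32/9)^(1/3) for SNFS and (64/9)^(1/3) for GNFS. The sketch below is not anything Silverman posted: it drops the o(1) terms entirely, so the break-even it finds is a ballpark only, and practical experience puts the real crossover a fair way from this estimate (which is part of why the question is close).

```python
from math import log

def nfs_work(ln_n, c):
    """Heuristic NFS effort exponent: c * (ln n)^(1/3) * (ln ln n)^(2/3).
    We compare exponents directly and ignore the o(1) term."""
    return c * ln_n ** (1.0 / 3) * log(ln_n) ** (2.0 / 3)

C_SNFS = (32.0 / 9) ** (1.0 / 3)   # ~1.526
C_GNFS = (64.0 / 9) ** (1.0 / 3)   # ~1.923

# 2,1598L by SNFS: difficulty 799 bits
target = nfs_work(799 * log(2), C_SNFS)

# Bisect for the GNFS input size (in decimal digits) with equal predicted effort
lo, hi = 50.0, 400.0
while hi - lo > 1e-6:
    mid = (lo + hi) / 2
    if nfs_work(mid * log(10), C_GNFS) < target:
        lo = mid
    else:
        hi = mid

print(f"SNFS difficulty 799 bits ~ GNFS C{lo:.0f} by this crude model")
```

Taken at face value the model lands well below C171, i.e. it favours SNFS; but since the o(1) terms are ignored, that only frames the question rather than settling it.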
[QUOTE=R.D. Silverman;133025]After that, I will sieve 2,1538M. [but may need help with the LA; my biggest machine has only 2G].[/QUOTE]I should be able to step in if needed.
Paul |
Base 11 is a bit neglected, but 5,331[+-] are both only just over 768 bits and look like reasonable personal-snfs targets; also 7,277[+-] at 778 bits.
2,821- is the only number on the wanted list that's not claimed, and is I suppose a natural target for [b]mersenne[/b]forum; a bit more work than 3+512, but 3+512 doesn't seem to have been a serious strain. I've reserved it with Sam, and will do some experimental sieving to get parameters tonight. x^6-2 must be the right polynomial, though I'm surprised to find that by at least one measure ([TEX]\sum_{p \in 2 \ldots 10000} \frac{N_{\mathrm {roots}}(f \pmod p)}p[/TEX]) it's worse (score 1.651) than 9x^6+1 (score 1.958). I don't know how you'd account for the contribution from real roots, which x^6-2 has and 9x^6+1 lacks. |
[QUOTE=R.D. Silverman;133025]
2,799+ (SNFS), 2,1538M (I will do; SNFS), 2,1586L (GNFS!), 2,1598L (??), and 2,1598M (GNFS), I exclude for the moment the base 2 numbers with composite exponents above 800 bits that have a usable algebraic factor. I will do some of these eventually. an observation: Base 11 has been relatively neglected....[/QUOTE] The new "who's doing what?" includes
[code]
11,251- c208 Childers/Dodson
2,1101+ c211 Silverman
...
2,788+ c219 NFSNET
6,313- c227 NFSNET
10,241- c229 NFSNET
[/code]
a very hard base-11 (a mistake on my part, I thought that it was still a c258, but ecm happened to it; so we're doing the c208 anyway). The 788+ was a last-minute suggestion from Sam, but it was clearly too small: under two weeks of sieving (with lots of turn-around effort for NFSNET). We're doing 10,241- c229, difficulty 241, mostly as a warm-up (and base-10 affirmative action); difficulty approaching 245 seems to be a more likely range.

Note also that NFSNET did do two of the last three under 768 bits, so it's not like we haven't been doing our share of the 1st five holes. As you've noted, the new Childers/Dodson target range is above difficulty 260; but we did do a couple from the 1st five on the way up (notably 12,241 C260 Childers/Wackerbarth/Dodson).

I'm on the last range of width 20M (out of 30M-250M) on 3,547-, after which I'll have four large matrices to deal with --- pending full production and data exchange to Greg's new machine, sieving is still way out-running our matrix resources. We clearly ought to be sieving harder/larger numbers so as to give the matrix queue a chance to clear. -Bruce |
I wouldn't be very optimistic that sieving harder numbers will give the matrix queue a chance to clear, unless you deliberately use sub-optimal large prime bounds: I get the strong impression that we're at a stage where harder numbers will have much harder matrices, and msieve is already taking a couple of months on four cores to handle a 20M matrix.
Or possibly block Wiedemann will be available by the time the sieving for a 900-bit number has finished, in which case all bets are off. |
[quote=fivemack;133035]Base 11 is a bit neglected, but 5,331[+-] are both only just over 768 bits and look like reasonable personal-snfs targets.
2,821- is the only number on the wanted list that's not claimed, and is I suppose a natural target for [B]mersenne[/B]forum; a bit more work than 3+512, but 3+512 doesn't seem to have been a serious strain. I've reserved it with Sam, and will do some experimental sieving to get parameters tonight. x^6-2 must be the right polynomial, though I'm surprised to find that by at least one measure ([tex]\sum_{p \in 2 \ldots 10000} \frac{N_{\mathrm {roots}}(f \pmod p)}p[/tex]) it's worse (score 1.651) than 9x^6+1 (score 1.958). I don't know how you'd account for the contribution from real roots, which x^6-2 has and 9x^6+1 lacks.[/quote] I'm willing to contribute to another collaborative effort, or to undertake one of the base 2 numbers suggested by R.D. Silverman. To do one of those solo I'd likely need a lot of help/advice as far as parameter selection, because I'm not experienced enough to be convinced I have it right otherwise. That said, comparing 2,1598M (GNFS) to 6^383+1 and 2,799+ (SNFS) to 3,512+, I think I could do either in a little over a month (but might need help with the matrix). Comments/advice? |
[QUOTE=fivemack;133039]I wouldn't be very optimistic that sieving harder numbers will give the matrix queue a chance to clear, ...[/QUOTE]
The matrix for 3,536+ will be done within a week (pending no problems; thanks for the reply on the msieve thread), so I want the next number to take more than a week to sieve. The number 10,257+ at difficulty 257 came within a few hours of completely finishing sieving in under seven days (although other users arrived, which added two days to the walltime). The matrix for 10,257- is smaller/easier than 3,536+ despite being a harder number (due to oversieving).

I do seem to have disk space for four large matrices (barely) once 3,547- finishes sieving later today; so 536+ will make space for 2,949+ and 257- will make space for the one after that (both candidates having recently been c258's). Your reply gives a long-term analysis for what we hope is a short-term problem (pending my being able to ship a matrix or two off to Greg's new machine, so we can alternate matrices and double the available time for running harder numbers). We're mostly waiting for Greg's C260 to finish to see where things stand. -Bruce |
I've done some trial sieving on 2,799+ with various parameterizations.
Rational side sieving of this polynomial
[code]
n: 17046484339439502390787014663843382603841990311536588019427381788315112645544913528357093997566704677217496355462170676223202258500459624221675999642483043420478565738287873667689291
skew: 1
c6: 2
c0: 1
Y1: -1
Y0: 10889035741470030830827987437816582766592
rlim: 85000000
alim: 75000000
lpbr: 30
lpba: 30
mfbr: 60
mfba: 60
rlambda: 2.6
alambda: 2.6
[/code]
gave ~1.1 rels/q at 85M, and 0.6 rels/q at 185M, using gnfs-lasieve4I14e and test ranges of 1000 special-q. The area under the parallelogram is therefore about 85M rels (say 80M unique), which is in the vicinity of producing a matrix using the 0.8(pi(lpba) + pi(lpbr)) metric.

Algebraic side tests yielded fewer relations, and took longer, so the rational side seems to be the way to go. 31-bit tests seemed to show that as many or more special-q would be needed, with the accompanying headaches of shuffling around twice as much data, so 30 bits seems to be the way to go.

I can probably sustain 3 to 5 million special-q per day, depending on cluster usage by others, so this would take anywhere from 20 to 40 days, depending on actual yield and resource availability. *deep-breath* I'll go ahead and reserve it. Other stuff in my queue will push the start out to next week sometime. Suggestions/tweaks to parameters welcome.

- ben. |
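[Editorial note] The 0.8(pi(lpba) + pi(lpbr)) target can be sanity-checked with the simple x/(ln x - 1) approximation to the prime-counting function. This is an editorial back-of-envelope restatement, not ben's calculation; the approximation is good to well under 1% at this size (the true pi(2^30) is 54,400,028).

```python
from math import log

def pi_approx(x):
    """Rough prime-counting approximation: pi(x) ~ x / (ln x - 1)."""
    return x / (log(x) - 1)

lpba = lpbr = 2 ** 30  # 30-bit large-prime bounds on both sides

# relations needed ~ 0.8 * (pi(lpba) + pi(lpbr))
target = 0.8 * (pi_approx(lpba) + pi_approx(lpbr))
print(f"~{target / 1e6:.0f}M unique relations wanted")  # about 87M
```

That lands close to the ~80-85M figure from the trial sieving, which is what "in the vicinity of producing a matrix" is getting at.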
[QUOTE=bsquared;133052]I've done some trial sieving on 2,799+ with various parameterizations ...[/QUOTE]
Actually, since the exponent is divisible by 17 we could, in theory, save a factor of 2^47 (i.e. it becomes 752 bits), but we would have to work with a reciprocal octic polynomial....... Which is quite sub-optimal. But it would be an interesting experiment..... |
[QUOTE=bsquared;133052]gave ~ 1.1 rels/q at 85M, and 0.6 rels/q at 185M, using gnfs-lasieve4I14e and test ranges of 1000 special-q. The area under the parallelogram is therefore about 85M rels (say 80M unique), which is in the vicinity of producing a matrix using the 0.8(pi(lpba) + pi(lpbr)) metric.[/QUOTE]I agree that these parameters are reasonable and that you'll probably need about 76-80M unique relations.
I doubt, however, that the duplication rate will be 6%, the value of (85-80)/85; it is rather more likely to be 20-25%. At least, that's been my experience with numerous lattice sievers in the past. I would assume 90-100M raw relations will be required. You may be pleasantly surprised in the end, but setting expectations to realistic values before a large computation is almost always a good thing, IMO. Paul |
[quote=R.D. Silverman;133059]Actually since the exponent is divisible by 17 we could, in theory save a
factor of 2^47 (i.e. it becomes 752 bits), but we would have to work with a reciprocal octic polynomial....... Which is quite sub-optimal. But it would be an interesting experiment.....[/quote] I had not considered octic polynomials... can msieve deal with them? Or the ggnfs postprocessing suite? I don't have any other tools at my disposal for postprocessing. |
[quote=bsquared;133065]I had not considered octic polynomials... can msieve deal with them? Or the ggnfs postprocessing suite? I don't have any other tools at my disposal for postprocessing.[/quote]
the ggnfs lattice siever didn't like the octic either... simply didn't recognize the c7 or c8 lines.
[code]
n: 17046484339439502390787014663843382603841990311536588019427381788315112645544913528357093997566704677217496355462170676223202258500459624221675999642483043420478565738287873667689291
skew: 1
c8: 1
c7: -1
c6: -7
c5: 6
c4: 15
c3: -10
c2: -10
c1: 4
c0: 1
Y1: 140737488355328
Y0: -19807040628566084398385987585
rlim: 85000000
alim: 85000000
lpbr: 30
lpba: 30
mfbr: 60
mfba: 60
rlambda: 2.6
alambda: 2.6
[/code]
btw, I think the poly coefficients are correct... at least, they make the equation f(b+1/b)*b^8 - (b^17+1)/(b+1) = 0 when f(x) = c8*x^8 + ... + c1*x + c0 |
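[Editorial note] The coefficients do check out: the identity f(b+1/b)*b^8 = (b^17+1)/(b+1) can be verified with exact rational arithmetic. Note also that Y1 = 2^47 and Y0 = -(2^94 + 1), so the shared root of the two polynomials is m = 2^47 + 2^-47, and b = 2^47 recovers 2^799+1 divided by its algebraic factor 2^47+1. A quick editorial check:

```python
from fractions import Fraction

# Coefficients c0..c8 of the reciprocal octic from the job file above
C = [1, 4, -10, -10, 15, 6, -7, -1, 1]

def f(x):
    """Evaluate the octic at x (exact if x is a Fraction)."""
    return sum(c * x ** i for i, c in enumerate(C))

# f(b + 1/b) * b^8 should equal (b^17 + 1)/(b + 1) identically in b;
# b = 2^47 corresponds to the actual 2,799+ setup (Y1 = 2^47, Y0 = -(2^94 + 1))
for b in (2, 3, 10, 2 ** 47):
    bf = Fraction(b)
    lhs = f(bf + 1 / bf) * bf ** 8
    rhs = Fraction(b ** 17 + 1, b + 1)
    assert lhs == rhs
print("identity holds")
```

Since b + 1/b is fixed by b -> 1/b, any reciprocal polynomial of even degree 2d can be folded this way into a degree-d polynomial in b + 1/b; the octic here is the d=8 fold of the degree-16 quotient (b^17+1)/(b+1).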
[QUOTE=bsquared;133065]I had not considered octic polynomials... can msieve deal with them? Or the ggnfs postprocessing suite? I don't have any other tools at my disposal for postprocessing.[/QUOTE]
Msieve can only deal with degree 6 or less; if you want to recompile the source, just increment MAX_POLY_DEGREE in gnfs/gnfs.h While this could make the library (and its line siever) work, it doesn't make degree 8 a good idea. |
[quote=R.D. Silverman;133059]
But it would be an interesting experiment.....[/quote] Does your lattice siever handle octics? If so, I can try to get it built on my system and run a few sieve experiments. Let me know if it can and if you're willing, and I can PM you contact info. |
[QUOTE=bsquared;133074]Does your lattice siever handle octic's? If so, I can try to get it built on my system and run a few sieve experiments. Let me know if it can and if you're willing and I can PM you contact info.[/QUOTE]
I have to change a defined parameter and recompile. However, my suggestion was a joke of sorts.... An octic will be quite slow; the norms will be very unbalanced. |
Yes, it will be a long time before octics make sense.
Pick a Q; the basis of the lattice will have entries around \sqrt Q; with gnfs-lasieve4I14e, the search region is I think 2^15 x 2^14, so A and B will be around 2^13 \sqrt Q. So the octic polynomial will have values around 2^104 Q^4 ( = (2^13 \sqrt Q)^8), the linear around 2^94 \sqrt Q. Lattice-sieve on the algebraic side, so you get a free factor Q; Q ~ 2^26 in this case, so algebraic things are ~182 bits and linear ~107 bits. Say large-prime bound is 2^31; yield estimate is (182/31)^(-182/31) * (107/31)^(-107/31) or 4e-7. Not quite as disastrous as I'd have feared.

For a sextic, the polynomial values are around 2^78 Q^3, the linear around 2^134 \sqrt Q. Again Q ~ 2^26, algebraic things are around 130 bits after taking out the free factor Q, rational around 147 bits, yield estimate 1.53e-6, or 1.44e-6 if you sieve on the rational side.

(that argument suggests the algebraic side is very slightly better, but it'll be lost in the noise. The problem with comparing rational and algebraic sides over short intervals is that the number of usable Qs can fluctuate quite a lot. In 85M..85M+1k, you have 63 valid rational-side Q and 44 valid algebraic-side Q, so you'd expect a larger yield from the rational side.)
[code]
? k=0;forprime(p=85e6,85e6+1000,k=k+1);print(k)
63
? k=0;forprime(p=85e6,85e6+1000,k=k+length(polrootsmod(2*x^6+1,p)));print(k)
44
[/code] |
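[Editorial note] The 4e-7 and 1.53e-6 figures replay exactly from the u^-u estimate as the post states it; the bit counts below are taken from the post, not re-derived. (u^-u is a crude stand-in for the Dickman rho function, which is what a serious estimate would use.)

```python
from math import exp, log

def uu(norm_bits, lp_bits):
    """The u^-u smoothness proxy from the post, u = norm_bits / lp_bits."""
    u = norm_bits / lp_bits
    return exp(-u * log(u))

LP = 31  # large-prime bound 2^31

# Octic: ~182-bit algebraic norms (after the free factor Q), ~107-bit linear
octic = uu(182, LP) * uu(107, LP)
# Sextic: ~130-bit algebraic, ~147-bit rational
sextic = uu(130, LP) * uu(147, LP)

print(f"octic  ~ {octic:.2e}")   # the post quotes 4e-7
print(f"sextic ~ {sextic:.2e}")  # the post quotes 1.53e-6
```

The roughly 3.5x yield gap is the quantitative version of "the norms will be very unbalanced": the octic's huge algebraic norms more than cancel its smaller rational side.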
Once you've got a million relations, count (uniq -d or similar) how many duplicates you have; I know the coupon-collecting model is invalid because there are several different kinds of coupons, but if you believe the coupon-collectors then you'd expect 10,000 times as many duplicates from the full 10^8 relations as you find among the first million - so if you have much over 2500 duplicates from a million, you might want to use a larger search region.
|
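[Editorial note] The factor of 10,000 comes from the quadratic growth of birthday-style collisions: for n draws from a pool of m equally likely items, expected duplicates are ~ n^2/(2m) while n << m, so scaling n by 100 scales duplicates by ~10,000. A sketch under the same single-coupon-type simplification fivemack flags as invalid; the pool size M here is purely illustrative, not a measured quantity.

```python
def expected_duplicates(n, m):
    """Expected duplicates among n uniform draws from m items:
    n minus the expected number of distinct draws, m * (1 - (1 - 1/m)^n).
    For n << m this is approximately n^2 / (2m)."""
    return n - m * (1.0 - (1.0 - 1.0 / m) ** n)

M = 4e9  # illustrative pool size (assumption, not from the thread)
d_small = expected_duplicates(1e6, M)
d_full = expected_duplicates(1e8, M)
print(d_full / d_small)  # a little under (1e8 / 1e6)^2 = 10,000
```

The ratio dips slightly below 10,000 because at n = 1e8 the pool is no longer effectively infinite, which is exactly the regime where the crude model starts to understate trouble.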
1 Attachment(s)
This is great, I'm learning a lot already and I've hardly even started yet. Thanks Fivemack for the yield estimation example - I understand section 2 of Silverman's "Optimal parameterization of SNFS" a bit better now.
I've done some more yield estimates and I now think algebraic side sieving will be better, but the difference is hardly noticeable. In either case, I think I'll need a range of 120M rather than 100M. 5^293 - 3^293 is in linear algebra now, so I've started sieving for this project. I'll post updates here. |
You could try both algebraic and rational sieving; it worked quite well for me, and it didn't appear that many duplicates were caused by that. |
After sieving from 75M to 112M on the algebraic side, I've yielded ~32M relations with about 5.5% duplication (32.494M rels, 30.725M unique). So my initial optimistic projection is actually looking realistic.
I'll do at least a few million q of overlapping rational side sieving at some point, if for no other reason than that I'm curious about the duplication percentage. |
[QUOTE=bsquared;133425]After sieving from 75M to 112M on the algebraic side, I've yielded ~32M relations with about 5.5% duplication (32.494M rels, 30.725M unique). So my initial optimistic projection is actually looking realistic.
I'll do at least a few million q of overlapping rational side sieving at some point, if for no other reason than that I'm curious about the duplication percentage.[/QUOTE]Perhaps so. If so, I'll be somewhat surprised. Consider a birthday paradox model in which the number of dups grows as O(sqrt N). We'll doubtless find out in due course. Paul |
[quote=jasonp;133071]Msieve can only deal with degree 6 or less; if you want to recompile the source, just increment MAX_POLY_DEGREE in gnfs/gnfs.h
While this could make the library (and its line siever) work, it doesn't make degree 8 a good idea.[/quote] In polyutil.c, I see this code block
[CODE]
#if MAX_POLY_DEGREE > 6
#error "polynomial degree > 6 not supported"
#endif
[/CODE]
so simply modifying the #define in gnfs.h produces the above error. I also note that in get_polyval(...), cases for poly degree > 6 do not exist. I'm pretty sure it would be non-trivial to add the necessary code to make the library work with degree > 6, and of course I am not trying to tell you to do so... just FYI. |
[QUOTE=bsquared;133452]In polyutil.c, I see this code block
[CODE] #if MAX_POLY_DEGREE > 6 #error "polynomial degree > 6 not supported" #endif [/CODE] so simply modifing the #define in gnfs.h produces the above error. I also note that in get_polyval(...), cases for poly degree > 6 do not exist. I'm pretty sure it would be non-trivial to add the necessary code to make the library work with degree > 6, and of course I am not trying to tell you to do so... just FYI.[/QUOTE] Both of those code blocks need each other; there is a similar check on the polynomial degree for the factor base generator. Both can get worked around with poly-degree-independent code, but I think it probably isn't worth the effort. |
[QUOTE=bsquared;133144]This is great, I'm learning a lot already and I've hardly even started yet. Thanks Fivemack for the yield estimation example - I understand section 2 of Silverman's "Optimal parameterization of SNFS" a bit better now.
.[/QUOTE] If I can help, let me know. |
1 Attachment(s)
[quote=R.D. Silverman;133497]If I can help, let me know.[/quote]
Thanks for the offer; when I get time to study it more I'm sure I'll have more questions. I now have a matrix for 2,799+
[code]
Mon Jun 2 07:39:20 2008  Msieve v. 1.35
Mon Jun 2 07:39:20 2008  random seeds: ead23c09 94626658
Mon Jun 2 07:39:20 2008  factoring 3255376146246966985904726375847116859472462827235761161322580579841782609083671170883596966497892458918210289075062652493909664493453220811604505064862823148130649942409251 (172 digits)
Mon Jun 2 07:39:22 2008  no P-1/P+1/ECM available, skipping
Mon Jun 2 07:39:22 2008  commencing number field sieve (172-digit input)
Mon Jun 2 07:39:22 2008  R0: 10889035741470030830827987437816582766592
Mon Jun 2 07:39:22 2008  R1: -1
Mon Jun 2 07:39:22 2008  A0: 1
Mon Jun 2 07:39:22 2008  A1: 0
Mon Jun 2 07:39:22 2008  A2: 0
Mon Jun 2 07:39:22 2008  A3: 0
Mon Jun 2 07:39:22 2008  A4: 0
Mon Jun 2 07:39:22 2008  A5: 0
Mon Jun 2 07:39:22 2008  A6: 2
Mon Jun 2 07:39:22 2008  size score = 1.514165e-11, Murphy alpha = 1.888024, combined = 8.828752e-12
Mon Jun 2 07:43:06 2008  restarting with 107582692 relations
Mon Jun 2 07:43:06 2008
Mon Jun 2 07:43:06 2008  commencing relation filtering
Mon Jun 2 07:43:06 2008  commencing duplicate removal, pass 1
[snip relation errors...]
Mon Jun 2 07:53:49 2008  found 26008003 hash collisions in 107582655 relations
Mon Jun 2 07:53:49 2008  commencing duplicate removal, pass 2
Mon Jun 2 07:57:00 2008  found 20521518 duplicates and 87061137 unique relations
Mon Jun 2 07:57:00 2008  memory use: 504.8 MB
Mon Jun 2 07:57:28 2008  ignoring smallest 6566461 rational and 6562181 algebraic ideals
Mon Jun 2 07:57:28 2008  filtering rational ideals above 114884608
Mon Jun 2 07:57:28 2008  filtering algebraic ideals above 114884608
Mon Jun 2 07:57:28 2008  need 19692963 more relations than ideals
Mon Jun 2 07:57:28 2008  commencing singleton removal, pass 1
Mon Jun 2 08:06:48 2008  relations with 0 large ideals: 4031778
Mon Jun 2 08:06:48 2008  relations with 1 large ideals: 17147467
Mon Jun 2 08:06:48 2008  relations with 2 large ideals: 30851846
Mon Jun 2 08:06:48 2008  relations with 3 large ideals: 26140046
Mon Jun 2 08:06:48 2008  relations with 4 large ideals: 8874002
Mon Jun 2 08:06:48 2008  relations with 5 large ideals: 15998
Mon Jun 2 08:06:48 2008  relations with 6 large ideals: 0
Mon Jun 2 08:06:48 2008  relations with 7+ large ideals: 0
Mon Jun 2 08:06:48 2008  87061137 relations and about 58343537 large ideals
Mon Jun 2 08:06:48 2008  commencing singleton removal, pass 2
Mon Jun 2 08:16:11 2008  found 15772549 singletons
Mon Jun 2 08:16:11 2008  current dataset: 71288588 relations and about 41341507 large ideals
Mon Jun 2 08:16:11 2008  commencing singleton removal, pass 3
Mon Jun 2 08:23:58 2008  relations with 0 large ideals: 4031778
Mon Jun 2 08:23:58 2008  relations with 1 large ideals: 15635317
Mon Jun 2 08:23:58 2008  relations with 2 large ideals: 25655444
Mon Jun 2 08:23:58 2008  relations with 3 large ideals: 19817913
Mon Jun 2 08:23:58 2008  relations with 4 large ideals: 6137011
Mon Jun 2 08:23:58 2008  relations with 5 large ideals: 11125
Mon Jun 2 08:23:58 2008  relations with 6 large ideals: 0
Mon Jun 2 08:23:58 2008  relations with 7+ large ideals: 0
Mon Jun 2 08:23:58 2008  71288588 relations and about 51811973 large ideals
Mon Jun 2 08:23:58 2008  commencing singleton removal, pass 4
Mon Jun 2 08:31:47 2008  found 12544616 singletons
Mon Jun 2 08:31:47 2008  current dataset: 58743972 relations and about 38324310 large ideals
Mon Jun 2 08:31:47 2008  commencing singleton removal, pass 5
Mon Jun 2 08:38:21 2008  found 3070095 singletons
Mon Jun 2 08:38:21 2008  current dataset: 55673877 relations and about 35186969 large ideals
Mon Jun 2 08:38:21 2008  commencing singleton removal, pass 6
Mon Jun 2 08:44:35 2008  found 716774 singletons
Mon Jun 2 08:44:35 2008  current dataset: 54957103 relations and about 34466237 large ideals
Mon Jun 2 08:44:35 2008  commencing singleton removal, pass 7
Mon Jun 2 08:50:46 2008  found 161724 singletons
Mon Jun 2 08:50:46 2008  current dataset: 54795379 relations and about 34304314 large ideals
Mon Jun 2 08:50:46 2008  commencing singleton removal, final pass
Mon Jun 2 08:58:35 2008  memory use: 832.7 MB
Mon Jun 2 08:58:35 2008  commencing in-memory singleton removal
Mon Jun 2 08:58:40 2008  begin with 54795379 relations and 39362024 unique ideals
Mon Jun 2 08:59:55 2008  reduce to 48077568 relations and 32487013 ideals in 15 passes
Mon Jun 2 08:59:55 2008  max relations containing the same ideal: 23
Mon Jun 2 09:00:03 2008  filtering rational ideals above 720000
Mon Jun 2 09:00:03 2008  filtering algebraic ideals above 720000
Mon Jun 2 09:00:03 2008  need 115920 more relations than ideals
Mon Jun 2 09:00:03 2008  commencing singleton removal, final pass
Mon Jun 2 09:19:09 2008  keeping 42489970 ideals with weight <= 20, new excess is 3124760
Mon Jun 2 09:20:17 2008  memory use: 1359.9 MB
Mon Jun 2 09:20:17 2008  commencing in-memory singleton removal
Mon Jun 2 09:20:25 2008  begin with 48077568 relations and 42489970 unique ideals
Mon Jun 2 09:21:30 2008  reduce to 48064735 relations and 42477136 ideals in 8 passes
Mon Jun 2 09:21:30 2008  max relations containing the same ideal: 20
Mon Jun 2 09:22:13 2008  removing 3526885 relations and 3126885 ideals in 400000 cliques
Mon Jun 2 09:22:18 2008  commencing in-memory singleton removal
Mon Jun 2 09:22:25 2008  begin with 44537850 relations and 42477136 unique ideals
Mon Jun 2 09:23:33 2008  reduce to 44380014 relations and 39190698 ideals in 9 passes
Mon Jun 2 09:23:33 2008  max relations containing the same ideal: 20
Mon Jun 2 09:24:09 2008  removing 2588253 relations and 2188253 ideals in 400000 cliques
Mon Jun 2 09:24:12 2008  commencing in-memory singleton removal
Mon Jun 2 09:24:19 2008  begin with 41791761 relations and 39190698 unique ideals
Mon Jun 2 09:25:08 2008  reduce to 41695291 relations and 36905078 ideals in 7 passes
Mon Jun 2 09:25:08 2008  max relations containing the same ideal: 20
Mon Jun 2 09:25:42 2008  removing 2284373 relations and 1884373 ideals in 400000 cliques
Mon Jun 2 09:25:45 2008  commencing in-memory singleton removal
Mon Jun 2 09:25:51 2008  begin with 39410918 relations and 36905078 unique ideals
Mon Jun 2 09:26:37 2008  reduce to 39329088 relations and 34938113 ideals in 7 passes
Mon Jun 2 09:26:37 2008  max relations containing the same ideal: 20
Mon Jun 2 09:27:09 2008  removing 2116309 relations and 1716309 ideals in 400000 cliques
Mon Jun 2 09:27:12 2008  commencing in-memory singleton removal
Mon Jun 2 09:27:17 2008  begin with 37212779 relations and 34938113 unique ideals
Mon Jun 2 09:28:07 2008  reduce to 37139328 relations and 33147748 ideals in 8 passes
Mon Jun 2 09:28:07 2008  max relations containing the same ideal: 20
Mon Jun 2 09:28:37 2008  removing 1853374 relations and 1486516 ideals in 366858 cliques
Mon Jun 2 09:28:39 2008  commencing in-memory singleton removal
Mon Jun 2 09:28:44 2008  begin with 35285954 relations and 33147748 unique ideals
Mon Jun 2 09:29:24 2008  reduce to 35225617 relations and 31600416 ideals in 7 passes
Mon Jun 2 09:29:24 2008  max relations containing the same ideal: 20
Mon Jun 2 09:30:08 2008  relations with 0 large ideals: 60175
Mon Jun 2 09:30:08 2008  relations with 1 large ideals: 445027
Mon Jun 2 09:30:08 2008  relations with 2 large ideals: 2605439
Mon Jun 2 09:30:08 2008  relations with 3 large ideals: 6976222
Mon Jun 2 09:30:08 2008  relations with 4 large ideals: 10246869
Mon Jun 2 09:30:08 2008  relations with 5 large ideals: 8796796
Mon Jun 2 09:30:08 2008  relations with 6 large ideals: 4466548
Mon Jun 2 09:30:08 2008  relations with 7+ large ideals: 1628541
Mon Jun 2 09:30:08 2008  commencing 2-way merge
Mon Jun 2 09:30:48 2008  reduce to 23314614 relation sets and 19689413 unique ideals
Mon Jun 2 09:30:48 2008  commencing full merge
Mon Jun 2 09:37:59 2008  memory use: 2150.4 MB
Mon Jun 2 09:38:00 2008  found 12638970 cycles, need 12139613
Mon Jun 2 09:38:12 2008  weight of 12139613 cycles is about 789245012 (65.01/cycle)
Mon Jun 2 09:38:12 2008  distribution of cycle lengths:
Mon Jun 2 09:38:12 2008  1 relations: 1665988
Mon Jun 2 09:38:12 2008  2 relations: 1771402
Mon Jun 2 09:38:12 2008  3 relations: 1671409
Mon Jun 2 09:38:12 2008  4 relations: 1465681
Mon Jun 2 09:38:12 2008  5 relations: 1263365
Mon Jun 2 09:38:12 2008  6 relations: 1046885
Mon Jun 2 09:38:12 2008  7 relations: 876090
Mon Jun 2 09:38:12 2008  8 relations: 712269
Mon Jun 2 09:38:12 2008  9 relations: 567105
Mon Jun 2 09:38:12 2008  10+ relations: 1099419
Mon Jun 2 09:38:12 2008  heaviest cycle: 14 relations
Mon Jun 2 09:38:17 2008  commencing cycle optimization
Mon Jun 2 09:38:47 2008  start with 57686330 relations
Mon Jun 2 09:44:32 2008  pruned 613060 relations
Mon Jun 2 09:44:33 2008  memory use: 1982.1 MB
Mon Jun 2 09:44:33 2008  distribution of cycle lengths:
Mon Jun 2 09:44:33 2008  1 relations: 1665988
Mon Jun 2 09:44:33 2008  2 relations: 1782867
Mon Jun 2 09:44:33 2008  3 relations: 1697219
Mon Jun 2 09:44:33 2008  4 relations: 1480328
Mon Jun 2 09:44:33 2008  5 relations: 1280185
Mon Jun 2 09:44:33 2008  6 relations: 1054393
Mon Jun 2 09:44:34 2008  7 relations: 881788
Mon Jun 2 09:44:34 2008  8 relations: 709830
Mon Jun 2 09:44:34 2008  9 relations: 562365
Mon Jun 2 09:44:34 2008  10+ relations: 1024650
Mon Jun 2 09:44:34 2008  heaviest cycle: 14 relations
Mon Jun 2 09:45:05 2008  elapsed time 02:05:45
Mon Jun 2 10:27:11 2008
Mon Jun 2 10:27:11 2008
Mon Jun 2 10:27:11 2008  Msieve v. 1.35
Mon Jun 2 10:27:11 2008  random seeds: 1da8d383 c826f423
Mon Jun 2 10:27:11 2008  factoring 3255376146246966985904726375847116859472462827235761161322580579841782609083671170883596966497892458918210289075062652493909664493453220811604505064862823148130649942409251 (172 digits)
Mon Jun 2 10:27:12 2008  no P-1/P+1/ECM available, skipping
Mon Jun 2 10:27:12 2008  commencing number field sieve (172-digit input)
Mon Jun 2 10:27:12 2008  R0: 10889035741470030830827987437816582766592
Mon Jun 2 10:27:12 2008  R1: -1
Mon Jun 2 10:27:12 2008  A0: 1
Mon Jun 2 10:27:12 2008  A1: 0
Mon Jun 2 10:27:12 2008  A2: 0
Mon Jun 2 10:27:12 2008  A3: 0
Mon Jun 2 10:27:12 2008  A4: 0
Mon Jun 2 10:27:12 2008  A5: 0
Mon Jun 2 10:27:12 2008  A6: 2
Mon Jun 2 10:27:12 2008  size score = 1.514165e-11, Murphy alpha = 1.888024, combined = 8.828752e-12
Mon Jun 2 10:27:12 2008
Mon Jun 2 10:27:12 2008  commencing linear algebra
Mon Jun 2 10:27:16 2008  read 12139613 cycles
Mon Jun 2 10:29:44 2008  cycles contain 33133392 unique relations
Mon Jun 2 10:33:21 2008  read 33133392 relations
Mon Jun 2 10:34:37 2008  using 32 quadratic characters above 1073741468
Mon Jun 2 10:39:17 2008  building initial matrix
Mon Jun 2 10:52:39 2008  memory use: 4175.5 MB
Mon Jun 2 10:52:46 2008  read 12139613 cycles
Mon Jun 2 10:53:43 2008  matrix is 12139298 x 12139613 (3443.4 MB) with weight 1077440856 (88.75/col)
Mon Jun 2 10:53:43 2008  sparse part has weight 769138279 (63.36/col)
Mon Jun 2 10:58:24 2008  filtering completed in 3 passes
Mon Jun 2 10:58:28 2008  matrix is 12110668 x 12110868 (3440.3 MB) with weight 1076210645 (88.86/col)
Mon Jun 2 10:58:28 2008  sparse part has weight 768639453 (63.47/col)
Mon Jun 2 11:02:39 2008  read 12110868 cycles
Mon Jun 2 11:03:36 2008  matrix is 12110668 x 12110868 (3440.3 MB) with weight 1076210645 (88.86/col)
Mon Jun 2 11:03:36 2008  sparse part has weight 768639453 (63.47/col)
Mon Jun 2 11:03:37 2008  saving the first 48 matrix rows for later
Mon Jun 2 11:03:43 2008  matrix is 12110620 x 12110868 (3303.9 MB) with weight 818026074 (67.54/col)
Mon Jun 2 11:03:43 2008  sparse part has weight 744977285 (61.51/col)
Mon Jun 2 11:03:43 2008  matrix includes 64 packed rows
Mon Jun 2 11:03:43 2008  using block size 65536 for processor cache size 4096 kB
Mon Jun 2 11:06:08 2008  commencing Lanczos iteration (2 threads)
Mon Jun 2 11:06:08 2008  memory use: 3536.5 MB
[/code]
which is bigger than what I was hoping to work with and definitely could benefit from more sieving, but my sieving resources will be tied up on 2,1598L, so it'll have to do. ETA on 2 cores of a Quad Core X5365 is about 6/24/08.

- ben.

p.s. @ Xilman: duplication of ~19%, so right you are.
p.p.s. actual yield chart attached |
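[Editorial note] The ~19% duplication in the p.s. can be read straight off the filtering log; a trivial check (underscores are just Python digit separators, and the figures are copied from the log above):

```python
# Figures copied from the msieve filtering log
raw    = 107_582_692   # "restarting with 107582692 relations"
dups   =  20_521_518   # "found 20521518 duplicates ..."
unique =  87_061_137   # "... and 87061137 unique relations"

print(f"duplication rate: {dups / raw:.1%}")  # prints "duplication rate: 19.1%"
# 37 relations are unaccounted for: they were the malformed ones
# snipped from the log (107582692 - (20521518 + 87061137) = 37)
```

That lands squarely in Xilman's predicted 20-25% band rather than the optimistic 6% from the trial-sieving estimate.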
[QUOTE=bsquared;135011]
p.s. @ Xilman: duplication of ~ 19%, so right you are.[/QUOTE] :smile: :geek: Paul |
2,1586L
I've reserved the "neglected" 2,1586L with Sam.
Looking now for a good poly above 2e-12... Serge |
2,1586L finished by gnfs
It is now done.
c156 = 5918377526193953345654158109965504073069441603477383699602208521751099769 (p73) * p83. Thinking about the number cracked by Aoki. GNFS-165, cool size... Ah, but a man's grasp should exceed... :smile: whatever... -Serge |
[QUOTE=Batalov;141012]It is now done. ...[/QUOTE]Good!
Paul |
[QUOTE=Batalov;141012]It is now done. ... GNFS-165, cool size...[/QUOTE] GNFS-165 is a nice kind of size: the matrix ought to fit in 4GB, and a sieving time of a couple of CPU-years fits a number of combinations of patience and CPU availability. Go for it; I only wish the GNFS record table kept by Sam Wagstaff had a few more entries on it, so you could stay on it longer. |
[QUOTE=fivemack;141033]GNFS-165 is a nice kind of size, the matrix ought to fit in 4GB and a sieving time of a couple of CPU-years fits a number of combinations of patience and CPU-availability. Go for it; I only wish the GNFS-record-table kept by Sam Wagstaff had a few more entries on it so you could stay on it longer.[/QUOTE]
Paul (xilman) just finished the LA for 2,1538M C208 = p80.p128 [thanks!] p80 = 43125229216737082272194634000839865996208236725833746045943422097093188432757397 3,517+ will finish Friday evening. 10,247+ is about 40% sieved. |