Status
What's the status of 5,317-, 6,283-, 5,323- etc? I understand that
they finished sieving some time ago.
Paul has been processing 6,283-.
On Sept 5, he reported that he had produced a 4.75M matrix and that it would require about 12 more days to run the Block Lanczos phase on his machine. So I expect to hear something from him soon.

Subsequently, I began processing 5,317-. I managed to produce a 4.38M matrix, but my 32-bit processing is much slower than Paul's 64-bit implementation. I've done about 20%, and do not expect to finish before late October.

5,323- is our current sieving. We have done about 40% and should complete that sieving in mid-October if we are able to continue at the current rate.
[QUOTE=Wacky;114470]Paul has been processing 6,283-.
On Sept 5, he reported that he had produced a 4.75M matrix and that it would require about 12 more days to run the Block Lanczos phase on his machine. So I expect to hear something from him soon.[/QUOTE]I'm typing on a laptop in a hotel room just off Harvard Square right now, and accessing my home systems is possible but not entirely simple. When I last checked, some hours ago, the matrix still had a few hours to run. Therefore, it should be close to finished now. If I can complete the factorization from here, I will. Otherwise, it will have to wait until I return home (doubtless jetlagged) on Thursday.

Paul

P.S. Actually, it is impossible right now. It appears that my home ADSL link is down.
[QUOTE=xilman;114517]I'm typing on a laptop in a hotel room just off Harvard Square right now and accessing my home systems is possible but not entirely simple. When I last checked, some hours ago, the matrix still had a few hours to run. Therefore, it should be close to finished now.
If I can complete the factorization from here I will. Otherwise, it will have to wait until I return home (doubtless jetlagged) on Thursday. Paul P.S. Actually, it is impossible right now. It appears that my home ADSL link is down.[/QUOTE]The linear algebra had finished by this morning (Boston time) and so I kicked off the square root stage. The latter is expected to take an hour or more so I left it running. After the day's business here I tried to pick up the factors but the ADSL was down again. With a bit of luck I may be able to get the factors tomorrow.

Paul
Last night, Paul reported to me that he had returned to England and retrieved the following information before going to bed after an exhausting trip.
[QUOTE]6,283-
Probable prime factor 1 has 99 digits:
447124831877025366689793129436873423163216894462385022533572097231958503707190221729170132612703957
Probable prime factor 2 has 75 digits:
138457361320915478919381975760508114488979126852819238404548238145324558533
so not an ECM miss, even by Bruce's standards.[/QUOTE]I'm sure he will likely add some more details after he recovers.

On 5,317-, I have completed 30% of the Block Lanczos iterations. And we have completed approximately 50% of the sieving for 5,323-.
Is the actual NFSNet project website going to be updated soon with all this new information about what has been completed since April? That was the date of the last bit of news on the website. I understand if the person who does that needs to recover from his travelling, but as I help keep one of the biggest DC project websites updated, it would be nice to see some official news on the website. No hurry, though.
[QUOTE=Jwb52z;114870]Is the actual NFSNet project website going to be updated soon ... I understand if the person who does that needs to recover from his travelling, ...[/QUOTE]
Xilman does post-processing after sieving and sets sieving regions for the new numbers, not web page maintenance. The person who used to do the web page and stats retired from NFSNET and hasn't been replaced. Fivemack/Tom did some interim updates; perhaps he could do another round? -bdodson
Thank you so much for the quick answer to my question. There are a great many people who visit the DC site I help with and I'm sure they are interested in what's going on with things. :)
5,317- Factored
I am pleased to report that, with the assistance of "frmky" Greg Childers (CalState-Fullerton) and utilizing code from msieve ("jasonp" Jason Papadopoulos), NFSNet has completed the factorization of 5,317-.
Greg reported:
[QUOTE]Fri Sep 28 12:06:02 2007 prp85 factor: 1173266048118996938584719882501239841331337879112270918586790280760729499132694039331
Fri Sep 28 12:06:02 2007 prp110 factor: 78784317656768133239109671345422644991678073397834197717116145126532590520938482143011654153492533979880370291
Greg[/QUOTE]
Thanks to all who helped in this effort. I'm sure that Greg will have something to add about his part of the effort. In the meantime, we continue to sieve for 5,323-, having reached about 2/3 of the estimated interval. With both Paul and Greg capable of doing the post processing in a reasonable timeframe, we can use some additional help in the sieving. Please join us.

On behalf of the entire NFSNet Factoring Group,
Richard "Wacky" Wackerbarth
Just a few details... As far as I am aware, this is the largest SNFS factorization to date completed with Jason's msieve. Approximately 52 million relations were converted from CWI to GGNFS format, then fed to msieve running on a 1.8 GHz single-core Opteron 144 system with 3 GB of memory in 64-bit Linux.
The filtering took about 4 hours to complete, and used 2.6 GB of memory. It produced a final 4945096 x 4945296 matrix with weight 430746398. After setting aside the dense rows due to the quadratic characters and small primes, Block Lanczos was started on a 4945048 x 4945296 matrix with weight 325847903 (a bit less than 66 nonzero entries per column). Although the calendar time was a bit longer due to concurrently running processes, the runtime was just over 6 days, and used 1.4 GB of memory. Each square root run took about 2 1/4 hours and used about 1 GB of memory. The factors were found on the 5th dependency.

Greg
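[Editorial note: a quick back-of-the-envelope check of the density figure quoted above, in plain Python; the numbers are taken directly from Greg's report.]

```python
# Matrix statistics quoted above: after setting aside the dense rows,
# Block Lanczos ran on a 4945048 x 4945296 matrix of weight 325847903.
cols = 4945296
weight = 325847903

avg_per_col = weight / cols
print(f"average nonzeros per column: {avg_per_col:.2f}")  # prints 65.89
assert 65 < avg_per_col < 66   # "a bit less than 66 nonzero entries per column"
```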
[QUOTE=frmky;115323]Just a few details... As far as I am aware, this the largest SNFS factorization to date completed with Jason's msieve. [/QUOTE]
I must say, this announcement has just made my evening. Richard, is there any information about the matrix generated by the CWI suite, and how long the linear algebra would have taken?
The CWI suite produced a slightly smaller but denser matrix:
[QUOTE]Matrix has 4368656 rows and 4386633 columns. Remaining matrix weight is 326654668, density 0.0017%. Average prime weight 74.77, average relation-set weight 74.47.[/QUOTE]The BL was in progress, but I believe it was expected to take about 6 weeks on Richard's 32-bit machine.

Greg
[QUOTE=jasonp;115339]I must say, this announcement has just made my evening. ...[/QUOTE]
Mine too! Even with the benefit of a heads-up from Richard; a real solid improvement in the matrix step (and filtering, even). Congratulations Greg and Jason.

With the backlog cleared, we ought to be able to go on to finish the last of Bob's 768-bit list. Although, if I read the replies to King's poll correctly, Bob himself was voting in favor of harder Most Wanted base-2's. I gather that there was an intended follow-up poll; and append the candidates below. I'd like to see them all done; any order is good. More sieving contributors for NFSNET would move things along more quickly. Of course (as Greg has pointed out), someone to fill the missing Stats position would also help. For that matter, there's no reason for NFSNET to monopolize these candidates, if someone else is interested in stepping up ... -Bruce

[QUOTE]You could start a new ballot ... Keep the options as 7,263- 6,284+ 2,787- 2,776+ 7,268+[/QUOTE]
The rest of the 768-bit list is 6,292+ 7,269- and 7,271-, presumably for after the above ones from bases 6 and 7. The other Most Wanted base-2's seem to be 2,779+ and 2,787+, with 2,776+ (above) More Wanted.
We have a winner!
[QUOTE=bdodson;115356] ...I gather that there was an intended
follow-up poll; and append the candidates [above] ... Rest of the 768-bit list is 6,292+ 7,269- and 7,271- presumably for after the above ones from bases 6 and 7. The other Most wanted base-2's seem to be 2,779+ and 2,787+ with 2,776+ (above) More wanted.[/QUOTE]That was quick; 6,284+ wins. (Unless ECMNET gets a factor within the week.) Suppose 6,292+ would serve as a replacement for a poll. If that one wins next, the 768-bit list would be down to the four last base-7's. -bd
[QUOTE=bdodson;115356]Mine too! Even with the benefit of a heads-up from Richard; a
real solid improvement in the matrix step (and filtering, even). Congratulations Greg and Jason. With the backlog cleared, we ought to be able to go on to finish the last of Bob's 768-bit list. Although, if I read the replies to King's poll correctly, Bob himself was voting in favor of harder Most Wanted base-2's. [/QUOTE]The only reason for my preference was that I had promised Dick Lehmer that I would push to finish the base 2 tables that were incomplete from the 1st edition of the book. They have been around for a long time; the others are relative newcomers.

Some time ago, NFSNET had asked for suggestions for some "easier" numbers. I had suggested the 768-bit list as an alternative to the harder base 2 numbers. In fact, if NFSNET wants to make an effort to finish the base 2 tables through 800 bits, I will put aside my work and help with the sieving. My siever is a good deal faster than the one used by NFSNET.

There are two numbers left from 2- (one is supposedly "in progress", but I won't hold my breath), three from the 2+, two from the 2,4K+ and six from the 2LM table (one of which will finish in about 2.5 days).
[QUOTE=R.D. Silverman;115437]
There are two numbers left from 2- (one is supposedly "in progress", but I won't hold my breath), three from the 2+, two from the 2,4K+ and six from the 2LM table (one of which will finish in about 2.5 days).[/QUOTE]Substantial progress since the last time we looked; finishing 2,1582L c162 will move 2,1598M C160 up into a fifth hole; with the 12 remaining all readily visible on Sam's page. Base-2's are fine with me; the only concern being that if there aren't enough sievers we'd drift up towards six months of sieving, which is a long time to wait for someone just considering joining. -bd
[QUOTE=bdodson;115457]Substantial progress since the last time we looked; finishing 2,1582L c162
will move 2,1598M C160 up into a fifth hole; with the 12 remaining all readily visible on Sam's page. Base-2's are fine with me; the only concern being that if there aren't enough sievers we'd drift up towards six months of sieving, which is a long time to wait for someone just considering joining. -bd[/QUOTE]Here is 2,1582L. 2,1962M is filtering. 2,1630M is sieving; it requires a quartic and will be quite slow.

2,1582L c162 = p70.p92
p70 = 4785290367491952770979444950472742768748481440405231269246278905154317
p92 = 94732691570793956856759198414911779734119524415635396799864941098330965560269355785101434237
How do you use the additional factor p of an Aurifeuillian factor?
Taking out a factor p from x^p+1 involves the factorisation x^p+1 = (x+1)(x^{p-1}-x^{p-2}+...+1); the Aurifeuillian factors are from 4x^4+1=(2x^2+1)^2-(2x)^2 = (2x^2-2x+1)(2x^2+2x+1); but I don't see how those forms fit together so you can do both. What was the polynomial for 2,1582L or 2,1962M?
Ah, I've figured this out.
Say x=28M+14. factor(2^14*x^28+1) is
[code]
[2*x^2 - 2*x + 1 1]
[2*x^2 + 2*x + 1 1]
[64*x^12 - 64*x^11 + 32*x^10 - 16*x^8 + 16*x^7 - 8*x^6 + 8*x^5 - 4*x^4 + 2*x^2 - 2*x + 1 1]
[64*x^12 + 64*x^11 + 32*x^10 - 16*x^8 - 16*x^7 - 8*x^6 - 8*x^5 - 4*x^4 + 2*x^2 + 2*x + 1 1]
[/code]
Now put u=2x+1/x; x^6*(u^6+2*u^5-10*u^4-20*u^3+16*u^2+32*u+8) is one factor and x^6*(u^6-2*u^5-10*u^4+20*u^3+16*u^2-32*u+8) the other. So it was just a matter of picking the right substitution, as I suppose SNFS polynomial generation always is.

12k+6 gives you a quartic [4 -4 2 -2 1] natively, or you can do the substitution to turn it into a quadratic and then change X and scale to get a sextic (which must be better than a quartic at >180 digits); 20k+10 gives you an octic which turns into a quartic and is annoying at >180 digits. Have I missed anything out?
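[Editorial note: these identities are easy to machine-check. Below is a dependency-free Python sketch (standing in for the thread's pari/gp) that verifies the quoted factorisation of 2^14*x^28+1 and the u = 2x+1/x substitution by exact evaluation at more than 28 points, which suffices for polynomials of degree at most 28.]

```python
# Verify the factorisation of 2^14 * x^28 + 1 quoted above, and the
# u = 2x + 1/x substitution, by exact rational evaluation at 30 integer
# points (two integer polynomials of degree <= 28 agreeing there are equal).
from fractions import Fraction

def f1(x): return 2*x**2 - 2*x + 1
def f2(x): return 2*x**2 + 2*x + 1
def g1(x): return (64*x**12 - 64*x**11 + 32*x**10 - 16*x**8 + 16*x**7
                   - 8*x**6 + 8*x**5 - 4*x**4 + 2*x**2 - 2*x + 1)
def g2(x): return (64*x**12 + 64*x**11 + 32*x**10 - 16*x**8 - 16*x**7
                   - 8*x**6 - 8*x**5 - 4*x**4 + 2*x**2 + 2*x + 1)

def sextic(u, sign):  # the two Aurifeuillian sextics in u = 2x + 1/x
    return (u**6 + sign*2*u**5 - 10*u**4 - sign*20*u**3
            + 16*u**2 + sign*32*u + 8)

for n in range(1, 31):
    x = Fraction(n)
    assert f1(x)*f2(x)*g1(x)*g2(x) == 2**14 * x**28 + 1
    u = 2*x + 1/x
    assert x**6 * sextic(u, +1) == g2(x)   # the "+" sextic is the "+" dodecic
    assert x**6 * sextic(u, -1) == g1(x)   # and likewise with signs flipped
print("all identities check out")
```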
[QUOTE=fivemack;115756]How do you use the additional factor p of an Aurifeuillian factor?
Taking out a factor p from x^p+1 involves the factorisation x^p+1 = (x+1)(x^{p-1}-x^{p-2}+...+1); the Aurifeuillian factors are from 4x^4+1=(2x^2+1)^2-(2x)^2 = (2x^2-2x+1)(2x^2+2x+1); but I don't see how those forms fit together so you can do both. What was the polynomial for 2,1582L or 2,1962M?[/QUOTE]For 1582L: x^6 + 2x^5 - 10x^4 - 20x^3 + 16x^2 + 32x + 8
For 1962M: x^6 - 12x^4 + 4x^3 + 36x^2 - 24x - 8
[QUOTE=fivemack;115758]Ah, I've figured this out. ... So it was just a matter of picking the right substitution, as I suppose SNFS polynomial generation always is. ... Have I missed anything out?[/QUOTE]You got it.
[QUOTE=fivemack;115758]Ah, I've figured this out. ... 12k+6 gives you a quartic [4 -4 2 -2 1] natively, or you can do the substitution to turn it into a quadratic and then change X and scale to get a sextic (which must be better than a quartic at >180 digits) ... Have I missed anything out?[/QUOTE]I may have missed something. For example, for 2,1914M we get a quartic [4,4,2,2,1] with root 2^159. This becomes a quadratic x^2 + 2x - 2 with root 2^160 + 2^-159. To turn this into a sextic we make the substitution z^6 = x^2, giving z^6 + 2z^3 - 2, but the root is now z = (2^160 + 2^-159)^(1/3). Computing a cube root mod N is as hard as factoring N itself. How do you suggest computing this cube root?
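[Editorial note: the first step of the 2,1914M example checks out mechanically; what fails is only the cube root on the rational side. A small dependency-free Python sketch of the substitution described above.]

```python
# With u = 2x + 1/x, the quartic 4x^4 + 4x^3 + 2x^2 + 2x + 1 (the
# coefficient list [4,4,2,2,1], root 2^159) collapses to u^2 + 2u - 2,
# and the root maps to 2^160 + 2^-159, exactly as described above.
from fractions import Fraction

def quartic(x):
    return 4*x**4 + 4*x**3 + 2*x**2 + 2*x + 1

# A degree-4 identity is proved by agreement at 5 points.
for n in range(1, 6):
    x = Fraction(n)
    u = 2*x + 1/x
    assert x**2 * (u**2 + 2*u - 2) == quartic(x)

# The root 2^159 of the quartic maps to the claimed root of the quadratic.
x = Fraction(2)**159
assert 2*x + 1/x == Fraction(2)**160 + Fraction(1, 2**159)
print("substitution verified")
```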
Good point about the cube root; I hadn't thought what was happening on the linear side, and just remembered that the quadratic for x^3-1 can be turned trivially into a sextic. Sorry to have raised your hopes about 2,1914M.
Going through, 2,1962M is actually managing to use the factor nine by working with the factorisation of 2^18*x^36+1 ... I didn't expect that to be possible. Cool.

[In the past you've occasionally posted things here suggesting that you don't have access to computational algebra; I'm doing all of this with pari/gp, which is conveniently free software, though I'm sure you've got hold of that yourself.]
[QUOTE=bdodson;115372]That was quick; 6, 284+ wins. (Unless ECMNET gets
a factor within the week.) Suppose 6,292+ would serve as a replacement for a poll. If that one wins next, the 768-bit list would be down to four last base-7's. -bd[/QUOTE]No; no factor in my 3rd & last t50 on 6,284+. But 6,292+ isn't going to do as a replacement:
p60 = 151634244917416206035101114864937647283016448179107389644473
with prime cofactor. One more number to go to finish the 3rd t50 on the last of the c190-c233's in difficulty 220-229. This one was More Wanted, 6th on the top10. -Bruce
2,1962M
Here is 2,1962M C173 = p54.p119
p54 = 561070572288256277136602810062157316007570131157641589
p119 = 52548716528304902570734222019216090488579184876231505008640646786326028262229620519239651894875787945135414973991400093

2,1630M is in progress. It will take a while since a quartic is sub-optimal.
[QUOTE=bdodson;115457]Substantial progress since the last time we looked; finishing 2,1582L c162
will move 2,1598M C160 up into a fifth hole; with the 12 remaining all readily visible on Sam's page. Base-2's are fine with me; the only concern being that if there aren't enough sievers we'd drift up towards six months of sieving, which is a long time to wait for someone just considering joining. -bd[/QUOTE]Kleinjung finished 2,799- C188 = p56.p133
This may be an unnecessarily contentious post, but do you consider Kleinjung's result an ECM miss? I think it's marginal; a curve at the 55-digit level takes about 30 minutes on hardware on which I'd expect 240-digit SNFS to take around 20,000 hours, and 40,000 curves would probably have picked up a p56, but I'm not sure that ECM on that number is the first use to which I'd have put 20,000 CPU-hours.
[QUOTE=fivemack;116163]This may be an unnecessarily contentious post, but do you consider Kleinjung's result an ECM miss? I think it's marginal; a curve at the 55-digit level takes about 30 minutes on hardware on which I'd expect 240-digit SNFS to take around 20,000 hours, and 40,000 curves would probably have picked up a p56, but I'm not sure that ECM on that number is the first use to which I'd have put 20,000 CPU-hours.[/QUOTE]
It seems to me that, by what I understand as the conventional use, what you're discussing here is a hypothetical. To be an ecm miss, where ecm didn't do what we expected, you'd have had to actually run the 20,000 cpu hours.

Optimal use of computing resources for ecm also has a built-in failure rate. If 40,000 curves with B1=110M is an optimal test for a known p55, we're supposed to stop at probability 1-1/e of finding the factor, allowing 1/e (a bit over 30%) of a chance of not getting that specific p55; re-estimating the next most likely factor size, presumably p60, and switching B1 to look for p60's. So if there were 10 sieving candidates with a p55, we're supposed to find 7 of them, and leave the other 3. So at/near the bleeding edge of performance ecm, no single prime factor found by sieving instead of ecm is ecm's fault.

So as I understand the issue, the curves have to have actually been run, and for a single instance to qualify as an ecm miss, the factor should be notably below the level to which ecm was run. In this case, Kleinjung's reservation was way back in late June (it's on the July 1 "who's doing what"), so there were only 2*t50 bdodson curves run; perhaps a somewhat larger (2+epsilon)*t50 since this was a base-2 number. For me to say that ecm (rather than its operators, deciding what numbers to feed into ecm) had missed a specific factor of a number run to 2*t50, I'd be thinking something like p47-p48.

Peter has a term of "removing" an ecm factor size, rather than "finding", for which one runs twice the number of curves "expected"; lowering the probability of leaving a factor of that size to 1/(e^2). So if you were having hesitations about the 20,000 cpu hours, I'm expecting that if it were a question of 40,000 that you'd much rather have spent the time sieving, for which we'd be making certain progress towards the factorization.
Taking the two recent small factors together, Bob's p54 and Thorsten's p56, they seem entirely consistent with the Silverman-Wagstaff analysis -- if an ecm t50 has failed to find a factor, the next most likely factor size to look for is p55. And we're still a long way from being willing to run t55's on numbers of small snfs difficulty. Actually, I find these factor sizes somewhat encouraging: if/when almost all of the gnfs/snfs smallest factor sizes are above p80, ecm will no longer be an attractive method. -Bruce

PS - In the pdf JasonP points to on the kilobit snfs, the authors are grumbling that if they'd known that there was a p80 they might have run some more ecm. Sounds like we're within a generation or two of the first p80 referred to as an ecm miss! (That's cpu/memory generations; sooner than one might expect.)
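[Editorial note: the 1/e bookkeeping Bruce describes follows from the standard model in which each "expected" batch of curves (a t50) independently misses a factor of the target size with probability 1/e. A small illustration in Python; the k values below are just examples, not figures from the thread.]

```python
import math

# Under the usual model, one full t50 of curves finds a p55 with
# probability 1 - 1/e, so after k t50-equivalents a factor of that
# size is still missed with probability about e^(-k).
def miss_probability(k):
    return math.exp(-k)

for k in (1, 2, 3):
    print(f"after {k} x t50: {miss_probability(k):.1%} chance of missing the factor")
# One t50 leaves ~36.8% (the 1/e above); two leave ~13.5%, i.e. 1/e^2,
# Peter's threshold for "removing" rather than "finding" a factor size.
```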
[QUOTE=bdodson;116170]
<snip> And we're still a long way from being willing to run t55's on numbers of small snfs difficulty. Actually, I find these factor sizes somewhat encouraging: if/when almost all of the gnfs/snfs smallest factor sizes are above p80, ecm will no longer be an attractive method. [/QUOTE]I would not call M799 "small snfs difficulty". Otherwise, we are in total agreement.

BTW, I don't think the p56 is even close to being an ECM miss. The p51 from 11,251+ might be.
[QUOTE=R.D. Silverman;116197]I would not call M799 "small snfs difficulty". Otherwise, we are in total agreement.
... The p51 from 11,251+ might be.[/QUOTE]Thanks for the wake-up call! On M799, difficulty 240.52, I have these in difficulty 230-249. Most of the grid cpu's are in 250-361, both the larger-memory P4s (B1=110M) and the core2s (B1=260M), split c211-c233 and c190-c210. The Opterons just finished a 3rd t50 on c190-c233 with difficulty 220-229.99, and are starting in on 230-239.99. The new dual-xeon quads are warming up on 240-249.99, also c190-c233. So far and away, most of my curves are going on numbers with difficulty above 241!

I've been referring to c147-c154's as "soon to be smaller-but-needed" for a year or so already; but those are shrinking steadily, leaving c155-c169, and even c170-c189, as "smaller". After-effects, perhaps, of my extended run in c251-c365. If we finish the ones with (snfs) difficulty below 220 for which there's a quintic or sextic, these new-smaller c155-c179's will shrink towards degree 4's and/or gnfs's. -Bruce :rolleyes:
Smokin' !!
[QUOTE=R.D. Silverman;115437]... I had promised Dick Lehmer that I would push to finish the base 2 tables ...
Some time ago, NFSNET had asked for suggestions for some "easier" numbers. I had suggested the 768-bit list as an alternative to the harder base 2 numbers. In fact, if NFSNET wants to make an effort to finish the base 2 tables through 800 bits, ...[/QUOTE]Looks like you're getting your wish; nfsnet seems to have completed their run on the 768-bit list with 6,284+. Perhaps someone else will pick up the remaining ones (the last one has finished 3*t50).

As I recall the NFSNET charter, the objective isn't so much cleaning up the numbers within a comfortable range, but to push on to larger benchmarks. So as Xilman observes, 2,779+.C212 is difficulty 235, the largest we've done in a while (Lehigh seems to have been the last to switch); and the winner of the next number "vote" was 10,239-.C228, difficulty 239. Looks like Thorsten was headed in the right direction, with difficulties in the 240's. -Bruce
This evening, Greg reported to me:
[QUOTE]5,323- finished successfully. The factors are
prp54 factor: 824025642333621472612253607491152025643258690550015151
prp61 factor: 4520075300365525822415973296109200878340148487916084028121991
prp72 factor: 132981150324062454692451481044833258173562011479994362058454095433879531[/QUOTE]
This factoring utilized a combination of the CWI suite and post processing from msieve. 87.9M unique relations were collected by line sieving. I then processed the data, removing the singletons and cliques to the point that there were 3.4M excess for ideals > 10M. Those remaining 26M relations were sent to California, where Greg used msieve to further reduce the data to a 6.4M matrix. The Block Lanczos phase ran from Thu Oct 25 14:33:54 2007 to Tue Oct 30 03:36:20 2007.

We would like to thank Greg's colleague who gave up his machine not only for the weekend, but also all day Friday and Monday, to run the matrix solution.

We continue sieving for 2,779+.C212 and should switch to 10,239-.C228 early next month.
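[Editorial note: for readers unfamiliar with the filtering step mentioned above — a singleton is a relation containing a prime ideal that occurs in no other relation; it can never contribute to a dependency, so it is dropped, which can create new singletons, and the pass repeats until nothing changes. A toy Python sketch with hypothetical data; real filtering, whether in the CWI suite or msieve, also removes cliques and tracks excess.]

```python
from collections import Counter

def remove_singletons(relations):
    """Iteratively drop relations containing an ideal that occurs only once.

    relations: a list of sets, each set holding the ideals of one relation
    (a toy model of sieve output)."""
    relations = list(relations)
    while True:
        counts = Counter(ideal for rel in relations for ideal in rel)
        kept = [rel for rel in relations if all(counts[i] > 1 for i in rel)]
        if len(kept) == len(relations):   # fixed point reached
            return kept
        relations = kept

# Ideals 7 and 13 each occur once, so two relations go in the first pass;
# that leaves 5 and 11 as new singletons, so a third relation goes too.
rels = [{2, 3}, {2, 3}, {5, 7}, {5, 11}, {11, 13}]
print(remove_singletons(rels))   # [{2, 3}, {2, 3}]
```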
[QUOTE=Wacky;117421]We continue sieving for 2,779+.C212 and should switch to 10,239-.C228 early next month.[/QUOTE]
5,323- was selected at the last moment; in fact, the project file is in the same email as confirmation of the selection. So no extra ecm. I see a report of 2*t50, and the selection was before Bob's "(near) miss" of a p53, which was when I started queuing 3rd t50's. I did a bit better with 6,284+ with a last-minute 3rd t50 (thanks to an early "who's doing what?" from Sam, which had the nfsnet reservation). But 5,323- was earlier, and a 2*t50 effort is less than half of what's needed for the p54; ecm didn't get a fair shot. The current 779+ did get a 3rd t50; and the base-10 next number got 4*t50.

With current resources we wouldn't drop back to difficulty below 230. Seems like M787 would be about the best we could do in the mid-230 range, at the top of the most wanted list. We could apply the same parameters to pick up 2,787+ at the same difficulty. Or is there something in difficulty 240-249.99 that would be a better, more difficult challenge? Setting the number early would give me a better chance to make sure that the 3rd t50's been done, and a better chance at any p54-p69's by continuing on toward t55. I can try guessing a likely range or ranges, but a definite early selection would be best. -Bruce
This morning, Greg reported:
2,779+ is done, just in time for Thanksgiving. The factors are
prp86 factor: 17315878129048863927974905480696448369723747093035498799994851681384411684778961025249
prp127 factor: 1241587275642193613677401209382009830084399769371108904801198294935706207364264832500354031378698910359793960404372927442514937

Here are some of the interesting lines from the log:
[code]
Wed Nov 7 21:05:35 2007  Msieve v. 1.29
Wed Nov 7 21:05:35 2007  factoring 21499173951598023655871526129741238864252274176505248438905816972331478841874026717266127459812853910615830233333737201439633130982196868280103768992238493630431515155684471825756247809234523310884830836516644313 (212 digits)
Wed Nov 7 21:05:35 2007  commencing number field sieve (212-digit input)
Wed Nov 7 21:31:59 2007  found 90496938 unique relations
Thu Nov 8 01:06:57 2007  matrix is 7490253 x 7490451 with weight 666958089 (avg 89.04/col)
Thu Nov 8 01:07:48 2007  commencing Lanczos iteration
Wed Nov 21 12:03:07 2007  lanczos halted after 118456 iterations (dim = 7490189)
Wed Nov 21 20:06:52 2007  reading relations for dependency 4
Wed Nov 21 22:47:26 2007  prp86 factor: 17315878129048863927974905480696448369723747093035498799994851681384411684778961025249
Wed Nov 21 22:47:26 2007  prp127 factor: 1241587275642193613677401209382009830084399769371108904801198294935706207364264832500354031378698910359793960404372927442514937
Wed Nov 21 22:47:27 2007  elapsed time 337:41:52
[/code]
SubmitResults() failed
These several days I often get "failed" messages; sometimes it is "RequestAssignment() failed", sometimes "Received assignment ( IDLE 0-0)". This may be because there is no task temporarily. But why are there so many "SubmitResults() failed" messages? I'm really sick of it.
[code]
23:33:59 NFSNET Client - $Revision: 1.17 $
23:33:59 Initializing...
23:33:59 Initialized.
23:33:59 Requesting assignment...
23:34:01 Received assignment ( REDIRECT 0-0)...
23:34:03 Requesting assignment...
23:34:26 Received assignment ( REDIRECT 0-0)...
23:34:36 Requesting assignment...
23:34:40 Received assignment (Bristol 10_239M_1 10521035-10521291)...
23:34:40 Getting project details...
23:34:46 Building .polys file...
23:34:46 Building .in file...
23:34:46 Updating project .in file with assigned range...
23:34:46 Launching rootfinder...
# Find roots of polynomials for Number Field Sieve
# Intel x86 (Windows) V 1.0 RC1
# Department of Mathematics, Oregon State University
# Corvallis, OR 97331-4605 USA
# and
# Centrum voor Wiskunde en Informatica
# Kruislaan 413
# 1098 SJ Amsterdam
# The Netherlands
# Running on XHX034 at Mon Nov 26 07:34:46 2007
Polynomials read from projects\10_239M_1\polys.txt.
# n = 162385812809900583261295597372983559948698484946321658540735656202018003505845104737134070357362269119992033891923896707742660065148483572244656969761256208962241824944638737217461295395400432531624931235045560966776497496089919
# root = 10000000000000000000000000000000000000000
# npoly = 2
# Polynomial 1:
# X - 10000000000000000000000000000000000000000
# Polynomial 2:
# X^6 - 10
# Maximal fbbound = 30000000
Roots will be written to projects\10_239M_1\factor_base.txt, in ASCII format.
ln_abquot = -0.300, sum_logs_squared = 3973.514
ln_abquot = 0.000, sum_logs_squared = 3933.7122
ln_abquot = 0.300, sum_logs_squared = 3896.4756
ln_abquot = 0.600, sum_logs_squared = 3902.7517
ln_abquot = 0.120, sum_logs_squared = 3917.6835
ln_abquot = 0.390, sum_logs_squared = 3891.7477
ln_abquot = 0.453, sum_logs_squared = 3892.5022
ln_abquot = 0.363, sum_logs_squared = 3892.465
ln_abquot = 0.409, sum_logs_squared = 3891.6287
ln_abquot = 0.422, sum_logs_squared = 3891.7283
ln_abquot = 0.403, sum_logs_squared = 3891.6317
ln_abquot = 0.413, sum_logs_squared = 3891.6431
ln_abquot = 0.407, sum_logs_squared = 3891.6267
ln_abquot = 0.406, sum_logs_squared = 3891.6268
ln_abquot = 0.408, sum_logs_squared = 3891.6271
ln_abquot = 0.407, sum_logs_squared = 3891.6266
ln_abquot = 0.407, sum_logs_squared = 3891.6266
ln_abquot = 0.406, sum_logs_squared = 3891.6266
ln_abquot = 0.407, sum_logs_squared = 3891.6266
ln_abquot = 0.407, sum_logs_squared = 3891.6266
ln_abquot = 0.407, sum_logs_squared = 3891.6266
ln_abquot = 0.407, sum_logs_squared = 3891.6266
ln_abquot = 0.407, sum_logs_squared = 3891.6266
ln_abquot = 0.407, sum_logs_squared = 3891.6266
ln_abquot = 0.407, sum_logs_squared = 3891.6266
ln_abquot = 0.407, sum_logs_squared = 3891.6266
ln_abquot = 0.407, sum_logs_squared = 3891.6266
ln_abquot = 0.407, sum_logs_squared = 3891.6266
ln_abquot = 0.407, sum_logs_squared = 3891.6266
Suggested ratio for (max a)/(max b) = 1.502
Caution -- siever input uses (width a) = 2*(max a) and range for b.
This is because a can be negative but b must be positive.
p = 632881, trials = 8
p = 764689, trials = 8
p = 1007557, trials = 8
p = 1655569, trials = 8
p = 1962997, trials = 8
p = 4173241, trials = 8
p = 5010013, trials = 8
p = 5206321, trials = 8
p = 5619517, trials = 8
p = 6345307, trials = 8
p = 9149053, trials = 8
p = 11089963, trials = 8
p = 12543997, trials = 8
p = 13478917, trials = 8
p = 14795839, trials = 8
p = 16161253, trials = 8
p = 18022801, trials = 8
p = 19295677, trials = 8
p = 19320877, trials = 8
p = 19897333, trials = 8
p = 20637997, trials = 8
p = 22674013, trials = 8
p = 22684369, trials = 8
p = 23675173, trials = 8
p = 25244437, trials = 8
p = 26606557, trials = 8
p = 27651787, trials = 8
p = 28090507, trials = 8
p = 28113037, trials = 8
p = 28274443, trials = 8
At 1800001-st prime: 29005549
Polynomial 1 has 1857859 roots for 1857859 primes
 0 factors: 0 primes ( 0.00%)
 1 factors: 1857859 primes (100.00%)
Polynomial 2 has 1857154 roots for 1857859 primes
 0 factors: 1238603 primes ( 66.67%)
 1 factors: 2 primes ( 0.00%)
 2 factors: 464593 primes ( 25.01%)
 3 factors: 0 primes ( 0.00%)
 4 factors: 0 primes ( 0.00%)
 5 factors: 0 primes ( 0.00%)
 6 factors: 154661 primes ( 8.32%)
Statistics for sieving over projective space:
Polynomial   Expected log10 of contribution   Variance
1            7.08                             27.62
2            6.52                             26.72
(random)     7.08                             27.45
Statistics for sieving over a line:
Polynomial   Expected log10 of contribution   Variance
1            7.33                             27.79
2            6.79                             26.82
(random)     7.33                             27.53
23:36:09 Rootfinder is finished...
23:36:09 Launching siever...
00:07:53 Siever finished...
00:07:53 Submitting results... 0
00:08:14 SubmitResults() failed, sleeping for 10 seconds. Last message was:
00:12:28 SubmitResults() failed, sleeping for 20 seconds. Last message was:
00:15:58 SubmitResults() failed, sleeping for 40 seconds. Last message was: 0
00:17:00 SubmitResults() failed, sleeping for 80 seconds. Last message was:
00:22:24 SubmitResults() failed, sleeping for 160 seconds. Last message was:
00:29:08 SubmitResults() failed, sleeping for 320 seconds. Last message was:
[/code]
[QUOTE=wreck;119207]These several days I often get "failed" messages, some times it is "RequestAssignment() failed", some times "Received assignment ( IDLE 0-0)".[/QUOTE]
There are two causes of "IDLE" assignments. It is rare that there is nothing available to be assigned (except in the past few days, when the project manager has been away and allowed the assignment queue to become depleted), but you can also receive "IDLE" assignments because you are "blacklisted" for failure to properly return results. Given your log excerpts, I think that this is the likely cause.

As to the cause of the communications problem, I have insufficient information. I would like to assign you to another server where I can closely monitor the traffic. I will contact you via e-mail from your registration records.

Richard