Reserving C160_423xx767_5 phi_5(phi_47(17)) cofactor please.
|
[QUOTE=Dubslow;470994]That's my cue.
I'll start with C179_128_87[/QUOTE] This one looks like it'll take a week or so (plus time 'wasted' by me using the computer), while my 'server' is nearly done with its current batch of GIMPS work, so I'll also move that box to 14e postproc for a bit. To feed it, I'll take C167_6228362269_23. |
Arg! I've had yet another power outage. I was within three days of finishing C227_135_76. I could not resume the LA phase. I even tried from the backup file. So I am starting the 340-350 hour job over again.
I also lost a power supply from my other Core-i5 box. Grr! In the coming weeks I will be relocating to a more stable environment (hopefully) and should have my other box up and running. |
C228_146_66 is done.
Edit: C228_137_69 is now done too. Edit 2: As is "C229_129_54" which is really C229_149_54 |
L1321 done
1 Attachment(s)
[code]
Sun Nov 5 09:20:36 2017 p67 factor: 3042660916534343855991585964130599219853794625912517831108204854519
Sun Nov 5 09:20:36 2017 p85 factor: 7255307835868667283481910808005852591307069118603665674576540391161499487271694146579
Sun Nov 5 09:20:36 2017 p99 factor: 280626788436353522708403945439104648368279863532824820018157209240275040014238188668526976715529181
[/code] About 367 hours on 7 threads E5-2650v2 for a 25.48M density-136 matrix (138 didn't work). Log attached and at [url]https://pastebin.com/RWTssCnV[/url] |
Reserving C229_150_58 from 14e.
Also, please note that factors for C222_143_73 (14e queue) were announced [url=http://www.mersenneforum.org/showpost.php?p=470993&postcount=2228]here[/url]. |
C243_127_113 is done
|
C162_101xx233_3
This C162 factors as:
[CODE]prp62 factor: 39550712670602625730685670230551692539039453670739929686992819
prp100 factor: 4996695645892873526386571310566150898749138731617809838332856903805186509732317809346922337028152519[/CODE] Pastebin: [url]https://pastebin.com/n1NNpufP[/url] The matrix failed to build at TD=130 and built at TD=120. Approximately 30 hours to complete. |
C230_137_84 is done.
|
C230_125_103 and C180_2387363771591_17 are done.
|
C213_142_112 factors as:
[CODE]p68 factor: 14114226980832143871486194808759499713658945344209626238272238608589
p70 factor: 5364245504542426573020357128755111618727219232763014695433209010972749
p76 factor: 6019047739053885626924154517208997822085338212990788552355735444246527698833[/CODE] [url]https://pastebin.com/4idc0DT3[/url] 499 hours for 24.9M dimensions. :smile: |
C200_140_73 is done.
|
C192_1577431459_23 factored
1 Attachment(s)
[QUOTE=richs;470333]Reserving C192_1577431459_23 please.[/QUOTE]
[CODE]p79 factor: 1083969817234306783525936134892821434040095124678074171272497565907396374526041
p114 factor: 315610138847725588787082967232142439592063068356712781618622335948454109886619257012539523194451117881586904557837[/CODE] 49.4 hours on 8 threads i7-5500U with 8 GB memory for a 6.1M matrix at TD = 70 (130 failed). Log attached and at [URL="https://pastebin.com/Z5spRgXB"]https://pastebin.com/Z5spRgXB[/URL] . Factors added to factordb. |
[QUOTE=Dubslow;470994]
I'll start with C179_128_87[/QUOTE] [code]linear algebra completed 13278827 of 13279057 dimensions (100.0%, ETA 0h 0m)
lanczos halted after 209998 iterations (dim = 13278832)
recovered 40 nontrivial dependencies
BLanczosTime: 560188
commencing square root phase
handling dependencies 1 to 64
reading relations for dependency 1
read 6637108 cycles
cycles contain 20318558 unique relations
read 20318558 relations
multiplying 20318558 relations
multiply complete, coefficients have about 591.14 million bits
initial square root is modulo 201151
sqrtTime: 2041
p69 factor: 117785364136821512925601021797774153264921697097231933631065994380963
p111 factor: 124356721327251738236724520252489516481197189145346038072712997655982168684651560177493966260727065140467266891
elapsed time 157:59:13[/code] 132 failed, and I believe 120 also failed IIRC, hence the chosen td=100. Elapsed time is slightly artificially inflated by a few pauses on the order of an hour each. [url]https://pastebin.com/bYzzrqRQ[/url] I'll replace this job with C206 from phi_13(phi_7(5231)/79). Edit: Could someone please explain what the phi means? I figured out via FDB that this number is the cofactor of ((5231^7-1)/371330)^13-1, but I'm not quite sure how that corresponds to the title I reserved. It's not the Euler phi, right? Looks to me more like [$]\sigma((\frac{\sigma(5231^6)}{371330})^{12})[/$]. |
C192_1920647391913_19 and C219_129_92 are done.
|
[QUOTE=Dubslow;471027]I'll take C167_6228362269_23.[/QUOTE]
[code]matrix is 11093476 x 11093701 (4975.6 MB) with weight 1287974535 (116.10/col)
sparse part has weight 1193394360 (107.57/col)
using block size 8192 and superblock size 589824 for processor cache size 6144 kB
commencing Lanczos iteration (4 threads)
memory use: 4235.4 MB
linear algebra at 0.0%, ETA 128h 7m
checkpointing every 90000 dimensions
linear algebra completed 11093458 of 11093701 dimensions (100.0%, ETA 0h 0m)
lanczos halted after 175434 iterations (dim = 11093473)
recovered 35 nontrivial dependencies
BLanczosTime: 475366
commencing square root phase
handling dependencies 1 to 64
reading relations for dependency 1
read 5547356 cycles
cycles contain 18343222 unique relations
read 18343222 relations
multiplying 18343222 relations
multiply complete, coefficients have about 507.86 million bits
initial square root is modulo 1295958571
sqrtTime: 1930
p58 factor: 8470359578370412058969804375004353570376895978369804952261
p109 factor: 2753987064573008793490389067269262131434294898646249917899256051708431399479973137914218412394079297311398911
elapsed time 134:06:00[/code] td=132 IIRC. Is this an ECM miss? [url]https://pastebin.com/4aFx006g[/url] To replace it, taking "C176 from phi_11(phi_13(467)/(157*14610077))". Edit: A random question for anyone who can answer. Is disk speed (SSD vs slow HDD) a bottleneck in filtering (or matrix solving or even the sqrt)? IOW, is there any noticeable penalty from using a slow HDD relative to an SSD? |
Reserving C176 from phi_11(phi_13(467)/(157*14610077)) please.
|
[QUOTE=richs;471545]Reserving C176 from phi_11(phi_13(467)/(157*14610077)) please.[/QUOTE]
The post immediately prior to yours was me reserving it. Matrix already in progress. |
Taking phi_5(phi_13(43609)) from 15e.
|
[QUOTE=Dubslow;471546]The post immediately prior to yours was me reserving it. Matrix already in progress.[/QUOTE]
Sorry, I didn't see it. No harm done since I had not started. |
[QUOTE=Dubslow;471539]
td=132 IIRC. Is this an ECM miss? Edit: A random question for anyone who can answer. Is disk speed (SSD vs slow HDD) a bottleneck in filtering (or matrix solving or even the sqrt)? IOW, is there any noticeable penalty from using a slow HDD relative to an SSD?[/QUOTE] Naw, a GNFS 167 should be ECM'ed to around 1.5*t50. A p58 is a ways above that ECM depth. An SSD cuts by a factor of ~10 the remdups time and the reading-relations time within filtering. Matrix solving and sqrt aren't meaningfully affected. On 14e tasks, many of the relation sets have ~25% duplicates, so running remdups (on SSD or spinnyrust) cuts filtering time quite a bit- useful for jobs where you might run multiple target_density choices, or fear the relation set is oversieved. |
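For anyone curious what remdups actually spends its time on: a GGNFS relation line starts with an "a,b:" prefix that uniquely identifies the relation, so duplicate removal is essentially a single hash-set pass over the file, which is why disk read speed dominates. A minimal Python sketch of the idea (illustrative only; the real tool also rejects malformed relations, as in the counts quoted above):

```python
def remove_duplicates(lines):
    """Keep the first copy of each relation; a GGNFS relation line starts 'a,b:'."""
    seen = set()
    unique = []
    for line in lines:
        key = line.split(':', 1)[0]  # the 'a,b' prefix identifies the relation
        if key not in seen:
            seen.add(key)
            unique.append(line)
    return unique

# Toy data with one duplicate relation (hypothetical values)
rels = ["3,5:x:y", "7,11:a:b", "3,5:x:y", "2,9:c:d"]
assert remove_duplicates(rels) == ["3,5:x:y", "7,11:a:b", "2,9:c:d"]
```

Since the work per line is trivial, a run over hundreds of millions of relations is bounded by how fast the disk can stream them, consistent with the ~10x SSD speedup mentioned above.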
[QUOTE=richs;471549]Sorry, I didn't see it. No harm done since I had not started.[/QUOTE]
I was never particularly worried, hardly the end of the world, but I figured quick and simple post means fewest losses. :smile: [QUOTE=VBCurtis;471554]Naw, a GNFS 167 should be ECM'ed to around 1.5*t50. A p58 is a ways above that ECM depth. An SSD cuts by a factor of ~10 the remdups time and the reading-relations time within filtering. Matrix solving and sqrt aren't meaningfully affected. On 14e tasks, many of the relation sets have ~25% duplicates, so running remdups (on SSD or spinnyrust) cuts filtering time quite a bit- useful for jobs where you might run multiple target_density choices, or fear the relation set is oversieved.[/QUOTE] Thanks. I thought the msieve relations reading time was significantly slower than e.g. with remdups, which would indicate that the disk read speed is not the issue? I ask because the SSD in my server is only 40 GB, which isn't really a lot. I think if I ever want to do larger jobs (can't really imagine why though :smile:) I'd probably just have to take the performance hit of running off the HDD. |
l5715a factors
Log at [url]https://pastebin.com/w2mhs2wp[/url]
[code]prp64 factor: 1326547679693216195415207979047945539038010136629272997152371571
prp116 factor: 30763928607881574259775815051085662011220279524652271470895115875379106200949720706713542697293478846342970519422591[/code] Matrix 18.9M at density somewhere in the 90s; the log does not contain all the filtering runs, and I chose the matrix with the shortest ETA among the 3 or 4 filtering runs I tried. The dataset was oversieved, and I played with reducing max-rels and got a matrix of 17.0M with density ~134; however, this had a marginally longer ETA than the 18.9M matrix that used the full dataset. |
[QUOTE=Dubslow;471539]taking "C176 from phi_11(phi_13(467)/(157*14610077))"[/QUOTE]
[code]matrix is 4775799 x 4776024 (1986.9 MB) with weight 507168699 (106.19/col)
sparse part has weight 473083513 (99.05/col)
using block size 8192 and superblock size 589824 for processor cache size 6144 kB
commencing Lanczos iteration (4 threads)
memory use: 1670.8 MB
linear algebra at 0.0%, ETA 14h48m
checkpointing every 310000 dimensions
linear algebra completed 4775628 of 4776024 dimensions (100.0%, ETA 0h 0m)
lanczos halted after 75528 iterations (dim = 4775795)
recovered 34 nontrivial dependencies
BLanczosTime: 73922
commencing square root phase
handling dependencies 1 to 64
reading relations for dependency 1
read 2387732 cycles
cycles contain 7531148 unique relations
read 7531148 relations
multiplying 7531148 relations
multiply complete, coefficients have about 185.64 million bits
initial square root is modulo 4592947
sqrtTime: 525
p76 factor: 2275133046546010463392960754692250408893595743989442021480630407479677451561
p101 factor: 14849250759317137524992662778356622406936708163352353630108803001889391917906208971722727991869773621
elapsed time 21:24:06[/code] [url]https://pastebin.com/y3n3A7WD[/url] Taking C212_14009_59. |
I'm using 'phi' for what pari/gp calls polcyclo; this is a mistake, I should be calling it Psi and I'll do that in future.
|
C160_423xx767_5 phi_5(phi_47(17)) cofactor factored
1 Attachment(s)
[QUOTE=richs;471025]Reserving C160_423xx767_5 phi_5(phi_47(17)) cofactor please.[/QUOTE]
phi AKA Psi. [CODE]p64 factor: 1515204316408863739022085950018269044027271321712231568768887191
p97 factor: 1751101524725334989858615687926251265575736985429877936761757584363900871069334935470138544447661[/CODE] 46.5 hours on 8 threads i7-5500U with 8 GB memory for a 5.9M matrix at TD = 70 (130 failed). Log attached and at [URL="https://pastebin.com/zXB0Cyx8"]https://pastebin.com/zXB0Cyx8[/URL] . Factors added to factordb. |
[QUOTE=fivemack;471699]I'm using 'phi' for what pari/gp calls polcyclo; this is a mistake, I should be calling it Psi and I'll do that in future.[/QUOTE]
What is a mistake? What does Psi have to do with cyclotomic polynomials? |
Pressure issue, Brexit stuff.
|
[QUOTE=fivemack;471699]I'm using 'phi' for what pari/gp calls polcyclo; this is a mistake, I should be calling it Psi and I'll do that in future.[/QUOTE]The only "mistake" (and this would be really picky) is using "phi" (lower case) instead of "Phi" (capital letter). The lower case phi is often used for the Euler totient function. [tex]\Phi_{n}(x)[/tex] is standard notation for the cyclotomic polynomial for the primitive n-th roots of unity (which has degree [tex]\varphi(n)[/tex]). The usage of "phi" with a subscript (particularly in context) clearly indicated a cyclotomic polynomial.
OTOH, Psi, as in [tex]\Psi(x)[/tex], is standard notation for the Chebyshev function, or summatory von Mangoldt function, featured in a common statement of the Prime Number Theorem, [tex]\Psi(x)[/tex] ~ [tex]x[/tex]. |
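To make the notation concrete: [tex]\Phi_{n}(x)[/tex] (pari/gp's polcyclo) can be computed from the identity [tex]x^{n}-1=\prod_{d\mid n}\Phi_{d}(x)[/tex], and its degree is [tex]\varphi(n)[/tex], exactly as stated above. A small illustrative Python sketch (not how pari/gp actually implements polcyclo):

```python
from functools import lru_cache
from math import gcd

def euler_phi(n):
    """Euler's totient: count of 1 <= k <= n coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def poly_div(num, den):
    """Exact division of integer polynomials (coefficients low-to-high, den monic)."""
    num = list(num)
    q = [0] * (len(num) - len(den) + 1)
    for i in range(len(q) - 1, -1, -1):
        q[i] = num[i + len(den) - 1]  # den is monic, so no division needed
        for j, d in enumerate(den):
            num[i + j] -= q[i] * d
    return q

@lru_cache(maxsize=None)
def cyclo(n):
    """Phi_n(x): divide x^n - 1 by Phi_d(x) for every proper divisor d of n."""
    num = [-1] + [0] * (n - 1) + [1]  # x^n - 1
    for d in range(1, n):
        if n % d == 0:
            num = poly_div(num, cyclo(d))
    return tuple(num)

def cyclo_at(n, t):
    """Evaluate Phi_n at an integer t, as in names like phi_47(17)."""
    return sum(c * t ** k for k, c in enumerate(cyclo(n)))

# Degree of Phi_n is phi(n); e.g. Phi_47 has degree 46
assert len(cyclo(47)) - 1 == euler_phi(47) == 46
```

For a prime p, this reduces to Phi_p(x) = (x^p - 1)/(x - 1), e.g. cyclo_at(7, 2) gives the Mersenne number 127, which is why the "extra factor" x - 1 shows up when these reservations are expanded on FDB.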
C212_72_128, C226_127_106, and C251_909x157_7 are done.
|
C222_141_62 now factored
1 Attachment(s)
C222_141_62 is factored. Log attached. Results added to fdb.
[code] prp63 factor: 538917480030797794364918139369076337153365198344837164839925353 prp160 factor: 1804211482781215266466875569891721269987824223641523355554139070583599412527532683249627602174510829620928104925970046690322556990800861298134522956507407540693 [/code] |
[QUOTE=swellman;471547]Taking phi_5(phi_13(43609)) from 15e.[/QUOTE]
Need a bit of help here - under what label is the quoted composite listed in the NFS@Home data files? Uncertain how many digits are in the composite and I don’t want to guess in any case. (Pari/GP blew up in my face as well.) |
[QUOTE=swellman;471823]Need a bit of help here - under what label is the quoted composite listed in the NFS@Home data files? Uncertain how many digits are in the composite and I don’t want to guess in any case. (Pari/GP blew up in my face as well.)[/QUOTE]
C221_910xx371_5 |
[QUOTE=Dubslow;471531]
I'll replace this job with C206 from phi_13(phi_7(5231)/79). [/QUOTE] Other than the divisor of the inner term being wrong according to [URL="http://factordb.com/index.php?id=1100000000128028191"]FDB (should be 371330, not 79)[/URL], it is now factored. [code]matrix is 9920278 x 9920503 (4107.5 MB) with weight 1040129558 (104.85/col)
sparse part has weight 977561712 (98.54/col)
using block size 8192 and superblock size 786432 for processor cache size 8192 kB
commencing Lanczos iteration (8 threads)
memory use: 3497.3 MB
linear algebra at 0.0%, ETA 96h38m
checkpointing every 110000 dimensions
linear algebra completed 1692918 of 9920503 dimensions (17.1%, ETA 74h55m)
linear algebra completed 7202298 of 9920503 dimensions (72.6%, ETA 23h43m)
linear algebra completed 7417516 of 9920503 dimensions (74.8%, ETA 21h54m)
^Z
[1]+  Stopped    nice -n 19 ./msieve -t 8 -v -nc "target_density=110"
bill@Gravemind ~/nfsathome $ fg
nice -n 19 ./msieve -t 8 -v -nc "target_density=110"
linear algebra completed 9920160 of 9920503 dimensions (100.0%, ETA 0h 0m)
lanczos halted after 156875 iterations (dim = 9920274)
recovered 33 nontrivial dependencies
BLanczosTime: 327201
commencing square root phase
handling dependencies 1 to 64
reading relations for dependency 1
read 4961930 cycles
cycles contain 15694454 unique relations
read 15694454 relations
multiplying 15694454 relations
multiply complete, coefficients have about 391.11 million bits
initial square root is modulo 10417067
GCD is N, no factor found
reading relations for dependency 2
read 4960590 cycles
cycles contain 15689492 unique relations
read 15689492 relations
multiplying 15689492 relations
multiply complete, coefficients have about 390.99 million bits
initial square root is modulo 10368131
sqrtTime: 2589
p88 factor: 1193522516034362105040618699616861395772386617834280035265027779362427290285892295580789
p119 factor: 36073530885176865878006518674815307691578117630793214230853114402485748757053272465900032755879374326035812612229478679
elapsed time 93:21:23[/code] [url]https://pastebin.com/mYmmSHyg[/url] |
[QUOTE=frmky;471826]C221_910xx371_5[/QUOTE]
Thank you! |
[QUOTE=Dubslow;471849]Other than the divisor of the inner term being wrong according to [URL="http://factordb.com/index.php?id=1100000000128028191"]FDB (should be 371330, not 79)[/URL][/quote]
The divisor is wrong, but not for the reason you state. It should be 71, not 79 - that's a typo - and the extra factor 5230 comes from the definition of phi_n. |
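fivemack's correction checks out numerically: since 7 is prime, phi_7(x) = (x^7 - 1)/(x - 1), so FDB's divisor 371330 is just the algebraic factor 5230 = 5231 - 1 times the removed prime 71. A quick sketch of the check (values taken from the posts above):

```python
x = 5231
phi7 = (x**7 - 1) // (x - 1)          # Phi_7(5231), exact since 7 is prime
assert (x**7 - 1) % (x - 1) == 0      # x - 1 = 5230 is the "extra factor" from the definition
assert 371330 == 5230 * 71            # FDB's divisor = (x - 1) times the typo'd prime 71
assert phi7 % 71 == 0                 # 71 really does divide Phi_7(5231)
assert (x**7 - 1) // 371330 == phi7 // 71  # so both forms name the same inner term
```

In other words, C206 from phi_13(phi_7(5231)/71) and the FDB description "cofactor of ((5231^7-1)/371330)^13-1" agree once the x - 1 is accounted for.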
[QUOTE=fivemack;471876]The divisor is wrong, but not for the reason you state. It should be 71, not 79 - that's a typo - and the extra factor 5230 comes from the definition of phi_n.[/QUOTE]
Aha, obviously I'm not very well versed in cyclotomics. Forgot about the algebraic factor (mostly because it's more convenient when working with FDB and yafu). Thanks. |
L3295A done
1 Attachment(s)
[code]
Wed Nov 15 16:11:47 2017 p88 factor: 2523736458627921023145795717278312510332675044476120748935257857252531967979710404049931
Wed Nov 15 16:11:47 2017 p93 factor: 117529146260985677311078059731021197946847596724602818549439979645777947375020581782484159121
[/code] 243.5 hours on 7 cores E5-2650v2 for a 20.31M density-130 (not 132) matrix. Log attached and at [url]https://pastebin.com/V570tfkz[/url] |
Taking C230_142_86
|
C223_129_100 Factored
1 Attachment(s)
C223_129_100
[code] p66 factor: 300634338355028902107770297818841985362732708412225230534671189001
p158 factor: 13710890265852876939918234253281665271763816874772099511164109834698568681020540058631256270440730166652606033577790433814449742651202525507914405580763759981
[/code] 461048659 raw relations / 345308985 unique relations. Log attached. |
3408_1671 factors
[code]prp88 factor: 1417911815027846802053291284074202683296044263447677506019370900162722120468816094645719
prp93 factor: 145373073512535632032343452896302114311357961994569163931012272061735693010559257271548488289[/code] log at [url]https://pastebin.com/QipkwDme[/url] 411M raw relations, 344M unique. Matrix TD 132 produced 14.6M matrix at density 96. |
Reserving C167_4788_12505.
|
Taking 8081^67-1 (15e).
|
C227_135_76 factored
408M unique relations to build a 20.0M matrix using TD=128. (TD=132 failed).
[CODE]p77 factor: 15964490763617942821554929254720694835604074989287437741802172054773394694877
p151 factor: 4460802349198982408280765869545747805981738719596425412499688908971784347693405040128665910961202600700378834301226877059280416661164739559148356741843[/CODE] [url]https://pastebin.com/WQQF7FwS[/url] |
13*2^828-1 factors
Log at [url]https://pastebin.com/WdPhpLZE[/url]
[code]prp62 factor: 31218770086328113069106512645320557869405043516087342127128143
prp134 factor: 21628664499104777790648131454693068351775431323246347241538024533780289511774766756972222779871836576200837163337851485290669686178819
[/code] This was a 14e/33 job. 650M raw relations, 536M unique was oversieved; TD = 132 produced a density 88 (sparse part density 79) matrix of size 16.5M. I would have requested 410M relations as 32LP; 650M at 33LP is faster to find (in seconds and in Q-range) than 410M at 32, and 650M was too many. |
C196_135_124 done
1 Attachment(s)
[code]
Sat Nov 18 19:59:10 2017 p75 factor: 493256075733235828719603706694310826387263994582477684236086531883097166311
Sat Nov 18 19:59:10 2017 p121 factor: 9672920637877273831373571056996202576954575529763025727282423366872027650358677672581556097977772035594575812103918000419
[/code] Uneventful except for the time taken: [code]
Thu Aug 31 10:27:56 2017 using block size 8192 and superblock size 1474560 for processor cache size 15360 kB
Thu Aug 31 10:33:54 2017 commencing Lanczos iteration (6 threads)
Thu Aug 31 10:33:54 2017 memory use: 20910.7 MB
Thu Aug 31 10:38:17 2017 linear algebra at 0.0%, ETA 2030h49m
Thu Aug 31 10:39:39 2017 checkpointing every 30000 dimensions
Sat Nov 18 00:12:52 2017 lanczos halted after 702030 iterations (dim = 44393476)
Sat Nov 18 00:13:30 2017 recovered 31 nontrivial dependencies
Sat Nov 18 00:13:34 2017 BLanczosTime: 6798080
Sat Nov 18 00:13:34 2017 elapsed time 1888:21:21
[/code] The power is reasonably reliable here. The machine is an i7-5820K with 32G memory; the job used all six cores and somewhere in the high twenties of gigabytes of memory for two and a half months. Log attached and at [url]https://pastebin.com/n7CxLdX8[/url] |
Taking C225_127_99
|
Fib(1301) factored
1 Attachment(s)
[code]
prp63 factor: 464757559194769853726213324625064879181539715631999236468111969
prp175 factor: 8193546379731162179523472180410007127751480250569760816583372116622037859397275686819201053586709225997727186508039084129493539555104825563023159480632525453874769217081815617
[/code] 447336938 raw relations / 349780835 unique relations |
Interesting reservation on [URL="http://stdkmd.com/nrr/c.cgi?q=reserved_and_submitted"]NRR website[/URL]:
[QUOTE]NFS@Home [URL="http://stdkmd.com/nrr/c.cgi?q=54441_248"]54441_248[/URL] (c249 / 584), [URL="http://stdkmd.com/nrr/c.cgi?q=18883_291"]18883_291[/URL] (c169 / 556)[/QUOTE]That's 584 and 556 days ago! Forgotten? |
C167_4788_12505 completed - 35 hours on 6.5M matrix (TD=130).
[CODE]prp81 factor: 108245084820212828887811387291343044089098703303043861848964083476501114791928049
prp87 factor: 164776850682591236198306472638767305357148967258041266597583412403064319946059801422289
[/CODE] [url]https://pastebin.com/4hqjQkx6[/url] |
phi_5(phi_13(43609)) AKA C221_910xx371_5 factored
1 Attachment(s)
[code]
p47 factor: 21772216960834806224487967871473176857756729891
p175 factor: 4182429556161917245112740601227598656069592376981401111311120523661003885031813158680936757414863529587165418173365083289474648930625024240428689551999738035829245075739168281
[/code] 293195900 raw relations / 261966696 unique relations |
C229_150_58 factored
1 Attachment(s)
[code]
prp111 factor: 115021043173117329871351074829678254557238693278336460668718279320571987842922667082383217932630922021091627853
prp119 factor: 19813231686987351304078425005299271036457163588028760723953792768712136916908876513116956741344392781446905050190752201
[/code] Nice split! 260440648 raw relations / 199643658 unique relations |
Reserving C163_636xx487_11 aka Phi_11(Phi_47(7)/13722816749522711)) for postprocessing.
|
[QUOTE=Batalov;472228]Interesting reservation on [URL="http://stdkmd.com/nrr/c.cgi?q=reserved_and_submitted"]NRR website[/URL]:
That's 584 and 556 days ago! Forgotten?[/QUOTE] Yes, that looks forgotten to me, and I'm not entirely sure that it's not my fault. I'll see if I can put in an SNFS polynomial for the bigger one today; would someone be willing to do the polynomial search for the C169? |
[QUOTE=fivemack;472303]... would someone be willing to do the polynomial search for the C169?[/QUOTE]
There is one posted but not sure how good it is. BTW: C160_423xx767_5 phi_5(phi_47(17)) cofactor is [url=http://www.mersenneforum.org/showpost.php?p=471700&postcount=2257]done[/url]. C227_135_76 is [url=http://www.mersenneforum.org/showpost.php?p=472113&postcount=2275]done[/url]. |
These numbers were already cracked by ECM sometime in May 2016. I sent the factors to Lionel (because only he can submit them to the Near-repdigit project site), but it seems he forgot about them. Fortunately I didn't clean out my private messages, so here they are.
54441_248
[code]Using B1=110000000, B2=776278396540, polynomial Dickson(30), sigma=809860913
Step 1 took 1134134ms
Step 2 took 299048ms
********** Factor found in step 2: 43802120335384069597226422115845628792717180855184283
Found probable prime factor of 53 digits: 43802120335384069597226422115845628792717180855184283[/code]
18883_291
[code]Using B1=43000000, B2=240490660426, polynomial Dickson(12), sigma=4050464285
Step 1 took 239896ms
Step 2 took 71135ms
********** Factor found in step 2: 3208074107647063018326273101562135693329271942833
Found probable prime factor of 49 digits: 3208074107647063018326273101562135693329271942833[/code] |
[QUOTE=unconnected;472305]These numbers are already cracked by ECM somewhere in May 2016. I've sent the factors to Lionel (because only he can submit them to Near-repdigit project site), but seems he forgot about that. Fortunately I didn't clean private messages, so here they are.
54441_248
Using B1=110000000, B2=776278396540, polynomial Dickson(30), sigma=809860913
Step 1 took 1134134ms
Step 2 took 299048ms
********** Factor found in step 2: 43802120335384069597226422115845628792717180855184283
Found probable prime factor of 53 digits: 43802120335384069597226422115845628792717180855184283[/quote] Cofactor [code]12429636745338862346787610397157798967438673266228138014594683957581473945519877837579203042380697665621892796222717406038384025275189360158942771356863560554226966033451078758861356640093824353627[/code] is C197 [quote]18883_291
Using B1=43000000, B2=240490660426, polynomial Dickson(12), sigma=4050464285
Step 1 took 239896ms
Step 2 took 71135ms
********** Factor found in step 2: 3208074107647063018326273101562135693329271942833
Found probable prime factor of 49 digits: 3208074107647063018326273101562135693329271942833[/QUOTE]Dividing out this, and the factors already found and shown [url=http://stdkmd.com/nrr/c.cgi?q=18883_291]here[/url], the remaining cofactor is P120 (prime, according to Pari-GP isprime()). Congratulations, this one is done! |
I've just added those factors to factordb as well. So it now knows 10^248*49-31 is partly factored and that 17*10^291-53 is fully factored.
Chris |
Fib(1301) Factored
Reported here: [url]http://www.mersenneforum.org/showpost.php?p=472218&postcount=2279[/url]
147^53+53^147 cofactor has another 20 days to go. 13_2_828_m factors reported here: [url]http://www.mersenneforum.org/showpost.php?p=472130&postcount=2276[/url] Others reported [url=http://www.mersenneforum.org/showpost.php?p=472304&postcount=2286]here[/url], reposted for convenience. |
Reserving Phi_13(Phi_5(Phi_13(17)/212057)/41*31*1708293108577921) for postprocessing.
What’s the name of the NFS data file? |
[QUOTE=swellman;472401]Reserving Phi_13(Phi_5(Phi_13(17)/212057)/41*31*1708293108577921) for postprocessing.
What’s the name of the NFS data file?[/QUOTE] I know it is a bit cumbersome but this might lend a [url=http://www.mersenneforum.org/showpost.php?p=472219&postcount=1249]clue[/url]. |
[QUOTE=RichD;472403]I know it is a bit cumbersome but this might lend a [url=http://www.mersenneforum.org/showpost.php?p=472219&postcount=1249]clue[/url].[/QUOTE]
Many thanks! Sorry for the dumb question over nomenclature that you’d clarified in your original post. Should be factored by midweek. |
I'll take 6867^67-1 (15e) next.
|
Reserving C200 from 135^88+88^135 (15e).
|
Phi_17(5366319547249)
[code]matrix is 11715858 x 11716083 (5247.2 MB) with weight 1349911266 (115.22/col)
sparse part has weight 1258349627 (107.40/col)
using block size 8192 and superblock size 786432 for processor cache size 8192 kB
commencing Lanczos iteration (8 threads)
memory use: 4461.7 MB
linear algebra at 0.0%, ETA 170h55m
checkpointing every 70000 dimensions
linear algebra completed 750716 of 11716083 dimensions (6.4%, ETA 128h40m)
^Z
[1]+  Stopped    nice -n 19 ./msieve -t 8 -v -nc "target_density=120"
bill@Gravemind ~/nfsathome $ fg
nice -n 19 ./msieve -t 8 -v -nc "target_density=120"
linear algebra completed 1934473 of 11716083 dimensions (16.5%, ETA 122h18m)
linear algebra completed 2270066 of 11716083 dimensions (19.4%, ETA 119h44m)
linear algebra completed 7413001 of 11716083 dimensions (63.3%, ETA 55h31m)
linear algebra completed 11715856 of 11716083 dimensions (100.0%, ETA 0h 0m)
lanczos halted after 185277 iterations (dim = 11715856)
recovered 31 nontrivial dependencies
BLanczosTime: 542055
commencing square root phase
handling dependencies 1 to 64
reading relations for dependency 1
read 5858094 cycles
cycles contain 19572722 unique relations
read 19572722 relations
multiplying 19572722 relations
multiply complete, coefficients have about 558.52 million bits
initial square root is modulo 102559
GCD is N, no factor found
reading relations for dependency 2
read 5860071 cycles
cycles contain 19580964 unique relations
read 19580964 relations
multiplying 19580964 relations
multiply complete, coefficients have about 558.75 million bits
initial square root is modulo 103177
sqrtTime: 4101
p90 factor: 381475207938506558696353205609056845268892924407587218597064456424792097904291059618648069
p115 factor: 1239818638545877679488938441425847441840256093576211971283712619915788114238844779543292151930793423935492252512029
elapsed time 153:14:35
[/code] [url]https://pastebin.com/nKY3JvWX[/url] |
1 Attachment(s)
[QUOTE=swellman;472277]Reserving C163_636xx487_11 aka Phi_11(Phi_47(7)/13722816749522711)) for postprocessing.[/QUOTE]
[code] p54 factor: 741207571058904885004584283677091501061192032631204587
p109 factor: 1457760428112017864432310317968177327722945809643137425759132598697446375685699430616015771117132708788258841
[/code] 120503535 raw relations / 105683470 unique relations |
I'll take 130^99+99^135 C215 cofactor (15e) next.
|
[QUOTE=RichD;472532]I'll take 130^99+99^135 C215 cofactor (15e) next.[/QUOTE]
It’s mislabeled in NFS@Home. Should be [url=http://www.mersenneforum.org/showpost.php?p=471958&postcount=1242]130^99+99^130[/url]. |
[QUOTE=swellman;472537]It’s mislabeled in NFS@Home. Should be [url=http://www.mersenneforum.org/showpost.php?p=471958&postcount=1242]130^99+99^130[/url].[/QUOTE]
Ah, OK. I probably won't start the download until tomorrow when the last few rels come in. My previous post-processing won't complete until Wednesday morning so that will be the start time for this one. |
C225_127_99 done
1 Attachment(s)
[code]
p83 factor: 15702558987404296455042685007156088519743099873405710365647363933559768661338320983
p143 factor: 35935571613332949234368874538212114193800463261422279515073567567268680587216876239025816380015634643820331190557316824912320058733276419178493
[/code] Log attached and at [url]https://pastebin.com/ZiWBzkaH[/url] 114.3 hours on 6 cores i7-5820K for 13.77M matrix at density 148 (150 didn't work) |
Taking C202_148_51 (15e)
|
[QUOTE=swellman;472401]Reserving Phi_13(Phi_5(Phi_13(17)/212057)/41*31*1708293108577921) for postprocessing.
What’s the name of the NFS data file?[/QUOTE] There seems to be a problem with the data file. Each and every relation gives a -11 error message. Remdups runs flawlessly, reporting 102207459 unique relations with 17751201 duplicates and 742 bad relations on ~120M raw relations. But then zero relations are accepted by msieve. Advice? Is there a problem with the poly as uploaded? I can’t find any such error - I’m not sure how to proceed. |
[QUOTE=swellman;472579]There seems to be a problem with the data file. Each and every relation gives a -11 error message. Remdups runs flawlessly, reporting 102207459 unique relations with 17751201 duplicates and 742 bad relations on ~120M raw relations. But then zero relations are accepted by msieve.
Advice? Is there a problem with the poly as uploaded? I can’t find any such error - I’m not sure how to proceed.[/QUOTE] Can you post one or two relations to see if they correspond to the poly? |
[QUOTE=swellman;472579]There seems to be a problem with the data file. Each and every relation gives a -11 error message. Remdups runs flawlessly, reporting 102207459 unique relations with 17751201 duplicates and 742 bad relations on ~120M raw relations. But then zero relations are accepted by msieve.
Advice? Is there a problem with the poly as uploaded? I can’t find any such error - I’m not sure how to proceed.[/QUOTE] Remove the second set of R0/R1 and try again. i.e, remove the two lines R1 1 R0 -10129... |
[QUOTE=Dubslow;471620]
Taking C212_14009_59.[/QUOTE] This will be done within approximately 24 hours, so I'll take Phi_43(548557) to replace it (and should take roughly the same amount of time). |
C230_142_86 done
1 Attachment(s)
[code]
p74 factor: 37327906105621484270025109795137204412068331164937144751618381838547048861 p156 factor: 528944388661023718474697360192086111285434681374233764335448076778288955546680226278097585373006378117026505045905926705308721431711509079513043094798563201 [/code] About 211.9 hours on 7 threads E5-2650v2 for 20.38M matrix at density 130 (132 didn't work). Log attached and at [url]https://pastebin.com/QxEMWkDs[/url] |
Taking C208_146_108
|
[QUOTE=RichD;472582]Remove the second set of R0/R1 and try again.
i.e, remove the two lines R1 1 R0 -10129...[/QUOTE] This seemed to do the trick. Removed the offending second set of lines and msieve chugged along with no complaints. Will check it tonight after work but it should be done in a day or so. [QUOTE=axn;472581]Can you post one or two relations to see if they correspond to the poly?[/QUOTE] No need (I hope!) but thank you for the assistance. |
[QUOTE=Dubslow;471620]
Taking C212_14009_59.[/QUOTE] [code]matrix is 17846563 x 17846788 (7433.4 MB) with weight 1890453712 (105.93/col)
sparse part has weight 1770150045 (99.19/col)
using block size 8192 and superblock size 589824 for processor cache size 6144 kB
commencing Lanczos iteration (4 threads)
memory use: 6393.8 MB
restarting at iteration 72736 (dim = 4599557)
checkpointing every 60000 dimensions
17846788 dimensions (25.8%, ETA 261h47m)
linear algebra at 25.8%, ETA 269h13m
17846788 dimensions (25.8%, ETA 269h13m)
linear algebra completed 5569950 of 17846788 dimensions (31.2%, ETA 269h30m)
linear algebra completed 6863764 of 17846788 dimensions (38.5%, ETA 233h20m)
linear algebra completed 8703158 of 17846788 dimensions (48.8%, ETA 192h33m)
linear algebra completed 11920553 of 17846788 dimensions (66.8%, ETA 124h 5m)
linear algebra completed 17846561 of 17846788 dimensions (100.0%, ETA 0h 0m)
lanczos halted after 282229 iterations (dim = 17846561)
recovered 33 nontrivial dependencies
BLanczosTime: 987812
elapsed time 274:23:34
bill@Guilty-Spark:~/ggnfs$ nice -n 19 ./msieve -t 4 -v -nc3 "target_density=110"
Msieve v. 1.53 (SVN 991M)
Tue Nov 28 15:15:14 2017
random seeds: 7335779f 931343bd
factoring 16162234374030954651062542469324941508405506194622290501101866664751734964448062528589459783403444764703106052467195460430696272229863871290077241590772985371501246636945120829551322430809412338842281631760237413 (212 digits)
no P-1/P+1/ECM available, skipping
commencing number field sieve (212-digit input)
R0: 291119537669624727213343933518641575244401
R1: -1
A0: -14009
A1: 0
A2: 0
A3: 0
A4: 0
A5: 0
A6: 1
skew 4.91, size 1.340e-12, alpha 1.413, combined = 1.496e-13 rroots = 2
commencing square root phase
handling dependencies 1 to 64
reading relations for dependency 1
read 8921386 cycles
cycles contain 29285820 unique relations
read 29285820 relations
multiplying 29285820 relations
multiply complete, coefficients have about 772.47 million bits
initial square root is modulo 8515081
sqrtTime: 3231
p98 factor: 26396311753925937190862558536206978591657244409785788810975806024794939400221800144048796395568607
p114 factor: 612291388459872130335142616522385637360492193115803628121516689319114698213184101343075167999811658116073635372859
elapsed time 00:53:52[/code] [url]https://pastebin.com/1XvaDZeH[/url] |
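The ETA figures in linear-algebra logs like the one above come from extrapolating the per-dimension rate. A back-of-the-envelope version (my own helper, assuming a constant rate, which real runs with restarts and drifting load don't quite follow):

```python
def lanczos_eta_hours(done_dims, total_dims, elapsed_hours):
    """Rough remaining-time estimate for block Lanczos: scale elapsed time
    by the ratio of remaining to completed dimensions."""
    if done_dims <= 0:
        raise ValueError("no progress yet")
    return elapsed_hours * (total_dims - done_dims) / done_dims
```

For example, a run 25% through after 10 hours projects about 30 hours remaining; the drifting ETAs in the log above show how much the actual rate varies in practice.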
Reserving 54441_249 (15e).
|
C231_133_73 is queued on 14e but is listed as a 32-bit job even though it was tested and uploaded as a 31-bit job. No big deal if the ‘bits’ field can’t be changed once the job is sieving; just pointing it out so we don’t chase an additional unnecessary 200M+ relations. The .poly file on the NFS@Home data page confirms it is a 31-bit job.
If it is indeed now a 32-bit job by the gatekeeper’s judgment then I retract my comment. Either way I’m willing to reserve this number. |
Thanks for catching that one!
The 'bits' field can be changed at any time, all it affects is the recommendation for maximum Q given by the admin page. |
8081^67-1 (15e) factored
186.1M unique relations to build a 16.8M matrix using TD=120.
[CODE]p71 factor: 13018465391791877739534473346053605419326814239734112223261860630234563
p188 factor: 60005491813845874845922138955322440313760259802388555416644486485401844829638861704147029280521023186384482791513684974252895215136658127172706273551190471659942821733672921209552246986769[/CODE] [url]https://pastebin.com/eTbixnL2[/url] |
1 Attachment(s)
Phi_13(Phi_5(Phi_13(17)/212057)/41*31*1708293108577921) factored
[code]
prp40 factor: 8328516800907981747615892893604213469249
prp128 factor: 29023524528146565011072844291082294417807204916208789404156435394760467534360384038556657772859476546887720723164295017003532931
[/code] |
[QUOTE=swellman;472686]Phi_13(Phi_5(Phi_13(17)/212057)/41*31*1708293108577921) factored
[code] prp40 factor: 8328516800907981747615892893604213469249 prp128 factor: 29023524528146565011072844291082294417807204916208789404156435394760467534360384038556657772859476546887720723164295017003532931 [/code][/QUOTE] A p40 - yikes!!! I was told this was in the batch that had 10,000 @ 43e6. The previous factor is a p38. I wonder if they stopped after finding that one. I didn't run anymore curves because 10K @ 43e6 was enough (supposedly) for an SNFS-227. |
[QUOTE=RichD;472690]A p40 - yikes!!!
I was told this was in the batch that had 10,000 @ 43e6. The previous factor is a p38. I wonder if they stopped after finding that one. I didn't run anymore curves because 10K @ 43e6 was enough (supposedly) for an SNFS-227.[/QUOTE] Hmm yes; ecm-toy reckons essentially zero probability for even a remaining p45 after 10k @ 43e6. |
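The "essentially zero probability" claim can be sanity-checked with the standard model of ECM as independent trials: if a factor of a given size is expected to need roughly t curves at this B1, the chance it survives n curves is about exp(-n/t). A sketch (the expected-curve count passed in is an illustrative assumption, not an ecm-toy computation):

```python
import math

def p_factor_survives(curves_run, expected_curves):
    """Probability a factor of a given size survives ECM, modeling each curve
    as an independent trial with success probability 1/expected_curves.
    (1 - 1/t)^n is approximated by exp(-n/t)."""
    return math.exp(-curves_run / expected_curves)
```

With an assumed few hundred expected curves for a p40 at B1=43e6, 10,000 curves give a survival probability on the order of 1e-15, which is why a p40 emerging from the SNFS job suggests the quoted ECM batch was never actually completed.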
I'm sorry to nag, but would people mind doing the pastebin upload as well as attaching compressed logs? Or is there a problem with pastebin access being blocked by some network provider?
|
[QUOTE=fivemack;472711]I'm sorry to nag, but would people mind doing the pastebin upload as well as attaching compressed logs? Or is there a problem with pastebin access being blocked by some network provider?[/QUOTE]
When the push to use pastebin started I did try to post my results there. Got multiple viruses across several of my machines, all with decent AV protection. Took a weekend to get them all scrubbed. Sorry, but I won’t go there again. Is there a safe alternative? |
[QUOTE=RichD;472690]A p40 - yikes!!!
I was told this was in the batch that had 10,000 @ 43e6. The previous factor is a p38. I wonder if they stopped after finding that one. I didn't run anymore curves because 10K @ 43e6 was enough (supposedly) for an SNFS-227.[/QUOTE] No worries. It was such a fast SNFS effort that applying “proper” levels of ECM would probably have taken > 1/3 * time_SNFS. Not a big deal. |
6867^67-1 (15e) factored
(really 6863^67-1)
200M unique relations to build a 12.0M matrix using TD=132 (136 failed). [CODE]p111 factor: 846694305477709720259619292669734792482240938014752030025440527027725521799338428108899756401706088088679430523
p143 factor: 19154048072130794265107793995641965997475939746992910048793095165056125031180414728267212301314669373079195222008688528022110014812486853012291[/CODE] [url]https://pastebin.com/wDEYZiqH[/url] |
For the sake of information and display, on the 14e status page, "C212_14009_59" listed as "queued for postproc" was in fact finished a [URL="http://mersenneforum.org/showpost.php?p=472629&postcount=2310"]week ago[/URL], while Phi_43(548557) listed as "sieving" is already a week into solving the matrix, with ~2 weeks to go, so is probably better displayed under "postproc".
|
[QUOTE=Dubslow;473190]For the sake of information and display, on the 14e status page, "C212_14009_59" listed as "queued for postproc" was in fact finished a [URL="http://mersenneforum.org/showpost.php?p=472629&postcount=2310"]week ago[/URL], while Phi_43(548557) listed as "sieving" is already a week into solving the matrix, with ~2 weeks to go, so is probably better displayed under "postproc".[/QUOTE]
[b]THEY ARE UPDATED[/b] |
Reserving C188_895087_37
|
C202_148_51 done
1 Attachment(s)
[code]
Wed Dec 6 03:19:07 2017 p95 factor: 28873896146276988990016400601931128662015749403749336032806793714629051064632509421212732949881
Wed Dec 6 03:19:07 2017 p108 factor: 274399655097307948245184688155783809991621747346375733730353019922160030934130460230187883709292248923963337
[/code] 142 hours for a 15.37M density-146 (not 148) matrix on six cores i7/5820K. Log attached and at [url]https://pastebin.com/XQunqECz[/url] |
C208_146_108 done
1 Attachment(s)
[code]
Thu Dec 7 06:53:53 2017 p90 factor: 735502387181819419212085263530730618316555463093087133089460445055045224663759177313417113
Thu Dec 7 06:53:53 2017 p119 factor: 11440540637606887376566564666973700945050055435819505484588734191253928734967813271038243785805036387841784637114454437
[/code] 99.1 hours on 7 cores E5-2650v2 for a 13.48M density-148 (not 150) matrix. Log attached and at [url]https://pastebin.com/zdw2upzZ[/url] |
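Several reports above mention a target density that failed before a lower one built a matrix ("density-148 (not 150)", "132 didn't work"). That walk-down is easy to automate; a sketch with a pluggable build step (names are mine; in practice try_density would shell out to msieve with the given target_density and report whether the matrix build succeeded):

```python
def build_matrix_with_fallback(try_density, densities=(150, 148, 146, 144)):
    """Walk down a list of target densities until the matrix build succeeds.
    try_density(td) is a callable returning True on success; returns the
    first density that worked, or None if all failed."""
    for td in densities:
        if try_density(td):
            return td
    return None
```

Used this way, an overnight run tries the densest (smallest) matrix first and settles for the first density that actually converges, instead of requiring a manual restart per failure.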
Taking C223_14083_59
|
Taking C258 from 107^128+1 (15e).
|