1 Attachment(s)
[QUOTE=swellman;459384]Reserving 131^69+69^131 C202 cofactor (15e).
[/QUOTE] [code]
prp96 factor: 475178751921526521616170440614194850226898411357838208494925916422150006251819306339907364604543
prp107 factor: 20044246039271762304121571326085852439837699385326752782575044670069472065707038255674125265202123108956991
[/code] |
134^116+116^134 factored
62 hours to solve a 9.3M matrix using -t 4, w/ TD=140.
[CODE]p69 factor: 109735561309304560221463133474329535600764091143693568715954505671461
p71 factor: 67752105415300838924282204064992365878065752425147730800062519612265101
p72 factor: 418953001977940720655803018952893507752913636124211978751654299380920009[/CODE][URL]https://pastebin.com/JXpDABNR[/URL] |
144^91+91^144
144^91+91^144 from the 15e queue looks undersieved. Can some more Q be added? I would also like to reserve it for postprocessing.
I plan to begin postprocessing the aliquot term C191_3408_1668 in five or six days. |
[QUOTE=swellman;459912]144^91+91^144 from the 15e queue looks undersieved. Can some more Q be added? I would also like to reserve it for postprocessing.
I plan to begin postprocessing the aliquot term C191_3408_1668 in five or six days.[/QUOTE] Added 100MQ and reserved it for you. |
Thanks!
I'm a little worried about the upcoming 33-bit job, but if I can get through filtering and successfully build a matrix (yes, those are very big ifs) I will grind it out until factors appear, even if it takes a year. I'm expecting the job to be 3-4K hours, but who knows. |
My current work has just completed, and I lack the desire to find something which requires a whole lot of manual attention, so: is there enough slack in the queue here to put a Sandy Bridge quad core to use full time for LA? (Hyperthreaded, 16 GiB of memory. For instance, would the C198 cofactor of 149^50+50^149 fit into 16 GiB? If so, I'd like to take it...)
|
[QUOTE=Dubslow;459933]My current work has just completed, and I lack the desire to find something which requires a whole lot of manual attention, so: is there enough slack in the queue here to put to use a Sandy Bridge quad core full time for LA? (Hyperthreaded, 16 GiB of memory. For instance would C198 cofactor of 149^50+50^149 fit into 16 GiB? If so, I'd like to take it...)[/QUOTE]
I haven't yet started my attempt at something off the 15e queue, but C212_142_108 (14e) is a similar size and is ready for post-processing. It might be a touch over-sieved but I don't know for sure, since I have only done one 32-bit job - which did fit into 14GB. |
[QUOTE=RichD;459946]I haven't tried my attempt at something off the 15e queue but C212_142_108 (14e) is similar size and is ready for post-processing. It might be a touch over-sieved but I don't know for sure since I have only done one 32-bit job - which did fit into 14GB.[/QUOTE]
Oops, didn't realize that (Est pending rels == 0) != finished. I see the one I mentioned is only 7% pushed. I guess I'll reserve C212_142_108, thanks. |
I've done a few 14e/32 jobs and they worked with 16 Gb. Shouldn't be a problem for you Dubslow.
|
C210_135_71 done
1 Attachment(s)
[code]
Sat May 27 23:19:58 2017 p56 factor: 14890777600853469857244624614786941908156913045320315661
Sat May 27 23:19:58 2017 p154 factor: 9636930188930775344736857589835777521984369595892932433384107728307325985183021559035364488795991094064629783047830412082179608636500487666270639951915121
[/code] 102.8 hours for 13.78M matrix at density 134 on 7 cores E5-2650v2. Log attached and at [url]https://pastebin.com/y5KhswPZ[/url] |
Reserving C196_122_115
|
1 Attachment(s)
[QUOTE=swellman;458304]Reserving C209_125_122.[/QUOTE]
[code]
p66 factor: 369744497817144851928121752816924890384811142365819376294662266519
p143 factor: 74011699138092934148533955605337519694090993270500257938791907887503605009066539729646158297112125468428946755108898795500082671779113192732687
[/code] |
C208_147_50 done
1 Attachment(s)
[code]
Thu Jun 1 04:20:31 2017 p89 factor: 30510567672320150496140585760622833471457556706086748340096383131104354374946596460756817
Thu Jun 1 04:20:31 2017 p119 factor: 39412196887213221291494488398533002405869135623474286744555959147790015996701941651741080813659418325918426825122503233
[/code] 102.1 hours on seven cores E5-2650v2 for 13.93M matrix at density 134 (not enough relations for 144). Log attached and at [url]https://pastebin.com/AahtT5Dx[/url] |
13_2_798m done
I requested too small a Q-range, so I added 10M relations sieved locally for a total of 329M raw relations, which failed at TD 118 but built an 11.0M matrix at TD 110:
[code]
prp72 factor: 648597011262834265317662990171210395571524237282851087313799505428573029
prp126 factor: 533840495553763181217648221549448416239749300447275751629469252837461089798732980935873991638483051103151412537716109564075833
[/code] Log at [url]https://pastebin.com/1SyR2LSv[/url] |
C191_3408_1668 is now in LA.
[code] linear algebra completed 117605 of 28957954 dimensions (0.4%, ETA 1149h51m) [/code] So it should finish in late July. Also, can a few more batches of 10M rels be added to 144^91+91^144? 450M is not terribly undersieved, but it seems low for an SNFS 285. |
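As a sanity check on the "late July" estimate above (the ETA figure is taken from the log quoted there; the rest is just hours-to-days arithmetic):

```shell
# Convert the reported LA ETA (1149h51m remaining at 0.4% complete)
# into calendar terms.
awk 'BEGIN {
    hours = 1149 + 51 / 60
    days  = hours / 24
    printf "ETA: %.1f days (~%.1f weeks)\n", days, days / 7
}'
```

About 48 days from early June does indeed land in late July.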
I'll take C195_148_98 next.
|
[QUOTE=swellman;460504]C191_3408_1668 is now in LA.
[code] linear algebra completed 117605 of 28957954 dimensions (0.4%, ETA 1149h51m) [/code] So it should finish in late July.[/QUOTE] Thank you for joining me at the long-duration processing party :) |
[QUOTE=fivemack;460551]Thank you for joining me at the long-duration processing party :)[/QUOTE]
It's not a sprint but a marathon. |
Reserving C193_251xx527_13 (14e)
|
C197_128341_47 completed
90 hours (with a power outage) to solve an 11.1M matrix using -t 4, TD=116 (TD=120 failed)
[CODE]p96 factor: 657108078029346655399091284465980148842276635597509158534125757148709892984873757359860826313193
p102 factor: 115915161569017546344103173820085214780022677858131454927806051787376078785325836022612335559728485011[/CODE][URL]https://pastebin.com/KYHhJ9Wj[/URL] |
C165_11040_10029 completed - 32 hours for 6.4M matrix (TD=140).
[CODE]prp53 factor: 34103109443849495177896175691114117893438328606925163
prp113 factor: 16070474827952312342192163780294704377514217913933538949837833089760181757598081418805620722439293963881957592123
[/CODE] [url]https://pastebin.com/F84WDtjM[/url] |
Reserving C213_142_74 and C214_123_104 to run over my vacation
|
C196_122_115 completed
1 Attachment(s)
[code]
Fri Jun 9 10:24:10 2017 p94 factor: 3039892778912793361201771005686048145522692611382470197611184121929106931468888679935052844217
Fri Jun 9 10:24:10 2017 p103 factor: 2254121839148977015614498434244843249605090943955621615427704937359441151861437413950838992327560382557
[/code] About 165 hours on 7 cores E5-2650v2 for a 17.65M density-134 matrix. Log attached and at [url]https://pastebin.com/KcnbZDW1[/url] |
[QUOTE=Dubslow;459948]
I guess I'll reserve C212_142_108, thanks.[/QUOTE] Seems to have hit the sweet spot for me. Used a lot of memory but left plenty for the rest, and took forever too on this now-ancient Sandy Bridge. [code]sqrtTime: 5302
p58 factor: 1391929062860331425902857251763306743480223968456854045037
p155 factor: 65467419602266995709772892309380211820458854348912123591852209875762471813553433934696213492685398928880813312245816156583660769669305599142672370356223889[/code] Log: [pastebin]09N3sK0N[/pastebin] |
It looks like C176_104281_47 is unclaimed, so I'll take it.
|
2 Attachment(s)
I just started running msieve on C193_251xx527_13. All I am getting is a running list of error -11 reading each relation. I think the poly and the fb files are not in sync based on prior experience.
In the poly file, m is positive while in the fb file, R0 is negative (the numbers are the same). Is this incorrect, or is there another problem that I should look for? I would be appreciative if someone with more knowledge would take a look and advise. Thanks! |
[QUOTE=richs;461245]I just started running msieve on C193_251xx527_13. All I am getting is a running list of error -11 reading each relation. I think the poly and the fb files are not in sync based on prior experience.
In the poly file, m is positive while in the fb file, R0 is negative (the numbers are the same). Is this incorrect, or is there another problem that I should look for? I would be appreciative if someone with more knowledge would take a look and advise. Thanks![/QUOTE] Remove the second set of R0/R1 lines from the .fb file, i.e. the "R1 1" and "R0 <negative big number>" lines, and try again. |
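Rich's fix above amounts to deleting the second, conflicting rational-side pair from the .fb file. A minimal sketch of automating it - the sample .fb content and file names below are invented for illustration; real files carry the full polynomial, and you should back up the original first:

```shell
# Hypothetical demonstration: keep only the FIRST R0/R1 pair in an
# msieve-style .fb file and drop any later duplicates of those keys.
printf 'N 1000000007\nSKEW 1.0\nR0 314159\nR1 1\nA0 -7\nA1 3\nR1 1\nR0 -314159\n' > sample.fb
# seen[$1]++ is 0 (false) the first time a key appears, so the first
# R0/R1 lines pass through and later ones are skipped.
awk '($1 == "R0" || $1 == "R1") && seen[$1]++ { next } { print }' \
    sample.fb > sample.fb.fixed
```

After this, sample.fb.fixed keeps "R0 314159" and drops the conflicting "R0 -314159" and second "R1 1".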
Thanks very much, Rich! That worked like a charm....
Rich |
I have a 32GB server node that needs a task while I'm away from campus for the summer, so I'll reserve 135^124+124^135 (GNFS-196) from the 15e queue.
It's a Core2-era 8 cpu node, so it won't be finished quickly! |
[QUOTE=VBCurtis;461359]I have a 32GB server node that needs a task while I'm away from campus for the summer, so I'll reserve 135^124+124^135 (GNFS-196) from the 15e queue.
It's a Core2-era 8 cpu node, so it won't be finished quickly![/QUOTE] At the rate that queue is moving that number may not be ready for post-processing until the end of summer. :smile: |
[QUOTE=RichD;461366]At the rate that queue is moving that number may not be ready for post-processing until the end of summer. :smile:[/QUOTE]
I hope that's not true! I leave 10 Jul for a lengthy vacation, and won't be back to my office (and the server) until mid to late Aug. 130^121 should be done in 7-9 days, a day or three to top off the GNFS-195, and 2 weeks for the GNFS 196 should make it just in time. :mike::smile: |
[QUOTE=VBCurtis;461370]I hope that's not true! I leave 10 Jul for a lengthy vacation, and won't be back to my office (and the server) until mid to late Aug.
130^121 should be done in 7-9 days, a day or three to top off the GNFS-195, and 2 weeks for the GNFS 196 should make it just in time. :mike::smile:[/QUOTE] But there is a loner lurking in the queue. :smile: |
[QUOTE=VBCurtis;461359]I have a 32GB server node that needs a task while I'm away from campus for the summer, so I'll reserve 135^124+124^135 (GNFS-196) from the 15e queue.
It's a Core2-era 8 cpu node, so it won't be finished quickly![/QUOTE] [QUOTE=VBCurtis;461370]I hope that's not true! I leave 10 Jul for a lengthy vacation, and won't be back to my office (and the server) until mid to late Aug. 130^121 should be done in 7-9 days, a day or three to top off the GNFS-195, and 2 weeks for the GNFS 196 should make it just in time. :mike::smile:[/QUOTE] VBCurtis - It would be a terrible waste of assets to have your 32Gb machine sitting idle all summer because of scheduling issues. I have not yet started on 144^91+91^144, which is ready for immediate start. Do you want to take it? It will be a challenging job, one that may require 32Gb. Or take 148^83 + 83^148 if you prefer but it won't be ready for sieving for some weeks. My own 32Gb box is still working the GNFS(190.47) job and won't be done until late July. I yield whichever numbers you wish to take. Have at 'em! |
[QUOTE=swellman;461376]VBCurtis - It would be a terrible waste of assets to have your 32Gb machine sitting idle all summer because of scheduling issues. I have not yet started on 144^91+91^144, which is ready for immediate start. Do you want to take it? It will be a challenging job, one that may require 32Gb. Or take 148^83 + 83^148 if you prefer but it won't be ready for sieving for some weeks.
My own 32Gb box is still working the GNFS(190.47) job and won't be done until late July. I yield whichever numbers you wish to take. Have at 'em![/QUOTE] Thanks, Sean. I'll take 144^91+91^144, then. I should be able to get it started this week. |
[QUOTE=VBCurtis;461460]Thanks, Sean. I'll take 144^91+91^144, then. I should be able to get it started this week.[/QUOTE]
Outstanding. We can play the reservations by ear - if 144^91+91^144 finishes before you leave town (which is no sure thing), feel free to grab the biggest remaining sieved job. I'm around all summer so I have more flexibility. |
Reserving C254_133_85 (14e).
|
C193_251xx527_13 Done
1 Attachment(s)
[CODE]p92 factor: 11347694591236600708982815312924109388078106280421755839062288448624422642854650695513674833
p102 factor: 315669733196332695371222038726197586338743395667748570232836301621698312701573711269593494133482985383 [/CODE] 73 hours on 6 threads i7-5500U with 8 GB memory for a 7.1M matrix at TD = 70 (130 failed). Log attached and at [URL="http://pastebin.com/Eu7E3E5y"]http://pastebin.com/Eu7E3E5y[/URL] |
1 Attachment(s)
[QUOTE=swellman;459610]I'll take C211_125_112 though it could probably use a few million more relations. It's actually a 32-bit job (the 14e status page erroneously lists 30-bit).[/QUOTE]
[code]
prp55 factor: 2894958758263384256363252967012997077848534107683898901
prp156 factor: 583603153855841909899834660448157737748350890318893596747370737706320077937018353767353851953199485717578645358947547556356871612853660852006097076931700623
[/code] |
[QUOTE=swellman;461478]Reserving C254_133_85 (14e).[/QUOTE]
Now in LA. 742 hrs to complete. :geek: |
C214_123_104 done
[code]
Tue Jun 20 10:53:23 2017 p98 factor: 56882218080195272165701631805309719223524269717449678298489706590039874491286167077398853249689471
Tue Jun 20 10:53:23 2017 p116 factor: 90831888050790458152657364989146359424085684377462285498221345499722937981049369437435512094722597150218279460518657
[/code] 176.6 hours on 6 cores i7/5820K for a 17.04M density-136 matrix (optimised - 138 didn't build). The log is slightly inaccessible (I am using an iPad and attaching a file is hard); I will update Sunday, by which point C213_142_74 should also be done. |
[QUOTE=swellman;461477]Outstanding. We can play the reservations by ear - if 144_91+91^144 finishes before you leave town (which is no sure thing), feel free to grab the biggest remaining sieved job. I'm around all summer so I have more flexibility.[/QUOTE]
You're funny, finish before I leave town! A 26.3M matrix has ETA of 72 days, roughly 1 Sep. Lots of duplicate relations, only 373M unique. |
[QUOTE=VBCurtis;461642]You're funny, finish before I leave town!
A 26.3M matrix has ETA of 72 days, roughly 1 Sep. Lots of duplicate relations, only 373M unique.[/QUOTE] I did say it was no sure thing! :smile: There's plenty of other challenging jobs in the 15e queue, if you have interest and assets in Sep. Hope you have a fun vacation. |
[QUOTE=swellman;461648]I did say it was no sure thing! :smile: There's plenty of other challenging jobs in the 15e queue, if you have interest and assets in Sep. Hope you have a fun vacation.[/QUOTE]
If 148^83+83^148 can get another ~60-70MQ scheduled to get raw rels over 700M, I can get the LA started before I leave. It's a 4-node Dell C6100, for which MadPoo sent me his drawer of server DDR2 4GB sticks; two nodes have 32gb and two have 24gb, so I can run a second LA job over summer. I usually use it for my personal NFS jobs, but I've been sending half of those to the 14e queue; it's only fair I take care of some LA as thanks to MadPoo for the memory and thanks to NFS@home for sieving "my" work. |
13*2^799-1 factored
[code]prp69 factor: 209370136204278159718136227373541907648511964188507794118837216932973
prp136 factor: 1109943915176522697462718074532204131651516579142092562807170493147836778490573392205395735235274719072201312266006383970551598398998051[/code] ~78 hrs on 5 threads of a 6-core i7 to solve an 8.5M matrix at TD 120 (128 failed to build). Log at [url]https://pastebin.com/nqGhcMEa[/url] |
[QUOTE=VBCurtis;461659]If 148^83+83^148 can get another ~60-70MQ scheduled to get raw rels over 700M, I can get the LA started before I leave. It's a 4-node Dell C6100, for which MadPoo sent me his drawer of server DDR2 4GB sticks; two nodes have 32gb and two have 24gb, so I can run a second LA job over summer.
[/QUOTE] Sounds good to me. You getting 148^83+83^148 into LA successfully before leaving town seems a realistic scenario. I'll step back and wait for everything to settle out before I start another 15e job in order to give you flexibility. |
[QUOTE=Dubslow;461242]It looks like C176_104281_47 is unclaimed, so I'll take it.[/QUOTE]
[code]
commencing square root phase
reading relations for dependency 1
read 7267256 cycles
cycles contain 20442872 unique relations
read 20442872 relations
multiplying 20442872 relations
multiply complete, coefficients have about 539.52 million bits
initial square root is modulo 69247
sqrtTime: 1846
p76 factor: 1169366970974508860820292931054527713330707639039345110066838577912788088781
p101 factor: 10693582328072654025860798420976061922085192947782044565961367375992887887303949763418416054690132597
elapsed time 175:58:23
[/code] About 12,000 consecutive bad relations were removed from the log for size. [pastebin]d0EKwg9M[/pastebin] I'll take C164_12400411646533_17 if no one else has claimed it? I'm not sure the 14e page has been updated in a while (at least my previous job is still listed there as incomplete). |
C195_148_98 factored
~355 hours (after a few restarts) to solve a 22.5M matrix, using -t 4 w/ TD=140.
[CODE]p75 factor: 220938197725753424774816741177015387792893612279558726320079925271705423701
p121 factor: 3513405926243250467186056508926109139775953982173185067537096361980251444860833030724775300446265047609501168832997893221[/CODE] [url]https://pastebin.com/fzWrEYXy[/url] |
[QUOTE=Dubslow;461785]
I'll take C164_12400411646533_17 if no one else has claimed it? I'm not sure the 14e page has been updated in a while (at least my previous job is still listed there as incomplete).[/QUOTE] [code]
reading relations for dependency 5
read 2603460 cycles
cycles contain 8821348 unique relations
read 8821348 relations
multiplying 8821348 relations
multiply complete, coefficients have about 485.11 million bits
initial square root is modulo 506363369
sqrtTime: 5522
p68 factor: 13463755899043165878733461047409670653626965058287209025054082309489
p97 factor: 3610957597862195892992215249949270949813588750241210253972434052057259224615006478296401320670049
elapsed time 27:09:45
[/code] [pastebin]xhqSHBnB[/pastebin] |
Going back a few pages, I don't think anyone's claimed C207_127_103 have they? I'll start downloading, please alert me if anyone's started it.
|
C213_142_74 done
1 Attachment(s)
[code]
Sat Jun 24 23:21:50 2017 p94 factor: 1377735902892459532366562914647077269231490719940922938746516450606547775419436969557209802177
Sat Jun 24 23:21:50 2017 p120 factor: 119254575837279507308198430521767016815169736963677528725391920507276670957401532972343444767643747793822497270979804733
[/code] 84.6 hours on 6 cores i7/5820K for 12.26M density-152 matrix (couldn't build at density 154). Log attached and at [url]https://pastebin.com/zg7gh627[/url] (I am posting at 4:30am because I had a seven-hour westbound flight yesterday ...) |
C214_123_104 details
1 Attachment(s)
Log attached and at [url]https://pastebin.com/EHEjnynB[/url]
|
I think I have the reservations and completed-work synchronised now
Taking C206_129_95 (ETA morning of July 6) |
[QUOTE=VBCurtis;461659]If 148^83+83^148 can get another ~60-70MQ scheduled to get raw rels over 700M, I can get the LA started before I leave.[/quote]
I have stuck another 100MQ on C195_148_83 |
C205_125_117
Reserving C205_125_117 for postprocessing. Thank you.
|
Taking C208_138_73
ETA July 3rd. Revised ETA July 6th, because my machine crashed late on Friday night and I forgot to restart it to run over the weekend. |
[QUOTE=VBCurtis;461659]If 148^83+83^148 can get another ~60-70MQ scheduled to get raw rels over 700M, I can get the LA started before I leave. It's a 4-node Dell C6100, for which MadPoo sent me his drawer of server DDR2 4GB sticks; two nodes have 32gb and two have 24gb, so I can run a second LA job over summer.
[/QUOTE] I'm very curious to see if a) you can now get 148_83 into LA with your hardware; and b) how long it will take to solve it. Rooting for success! What is the minimum number of relations for a 33-bit job? I had assumed it was north of 900M, but you often seem to solve 31 and 32-bit jobs with less than the typical number of relations, so I'm rethinking the issue. Good luck whatever the final number is. |
[QUOTE=swellman;462209]I'm very curious to see if a) you can now get 148_83 into LA with your hardware; and b) how long it will take to solve it. Rooting for success!
What is the minimum number of relations for a 33-bit job? I had assumed it was north of 900M, but you often seem to solve 31 and 32-bit jobs with less than the typical number of relations, so I'm rethinking the issue. Good luck whatever the final number is.[/QUOTE] I solved 13*2^864-1 with 15e/33LP and needed 620M raw relations to build a TD=124 matrix. But that was SNFS-264 or so, much smaller than this candidate. 13*2^837-1 was also 15e/33LP, and took 557M raw relations for ~SNFS-256; I believe that one would have been quicker as 32LP, but not by much. That factorization provides a reasonable lower bound for the number of relations needed, I think - that matrix built at TD=116. 13*2^827-1, a C251, built a TD=116 matrix with 326M raw relations with 15e/32. Two data points with small inputs isn't enough to fit a regression for number of relations vs input size; I look forward to seeing if 750M is enough for this rather large task. I'll wait a day or two to try filtering, since I may need every last relation. If I accept that 8 digits of SNFS difficulty requires 60M more relations (per the two data points above), 750M relations should be sufficient for SNFS-279ish; well short of this GNFS-194. Maybe I'll need 825M to get TD above 120... I plan to gather more 33LP data by running some SNFS-250 through NFS@home as 14e/33; I think 550-575M rels will work, and wish to demonstrate this. |
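VBCurtis's rule of thumb above - roughly 60M more raw relations per 8 digits of SNFS difficulty, anchored by the 13*2^837-1 and 13*2^864-1 data points - can be sketched as a quick linear extrapolation. The only inputs below are the two quoted data points; everything else is arithmetic, not new measurements:

```shell
# Linear extrapolation from the two quoted 33LP data points:
#   ~SNFS-256 -> 557M raw relations, SNFS-264 -> 620M raw relations.
awk 'BEGIN {
    base_d = 256; base_r = 557            # Mrels at SNFS-256
    slope = (620 - 557) / (264 - 256)     # ~7.9 Mrels per digit
    for (d = 256; d <= 288; d += 8)
        printf "SNFS-%d: ~%.0fM raw relations\n", d, base_r + slope * (d - base_d)
}'
```

At this slope, ~750M relations lands around SNFS-280, consistent with the "sufficient for SNFS-279ish" estimate above.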
I have done three 33-bit GNFS jobs above 190 digits.
[code]
192.3  15e  971.4Mr/680.3Mu  39.1M  1140hrs  i7/5820Kx6      td=128, td=144 failed
193.1  15e  873.0Mr/659.5Mu  34.5M   706hrs  i7/4930Kx6      td=120
196.8  16e  885.4Mr/645.7Mu  33.4M   852hrs  K10/1900x24MPI  td=120
[/code] I would be quite surprised if you can make a usable matrix with fewer than 850M raw relations at this level; accordingly, I have pushed Q_max up to 900M. I have also done some SNFS jobs with 33-bit large primes: [code]
272.8  15e  1064.4Mr/790.2Mu  44.2M  1550hrs  i7/5820Kx6      td=160
278.9  15e   985.4Mr/742.4Mu  42.8M  1290hrs  K10/1900x48MPI  td=136, td=144 failed
[/code] I think it would be useful to record the size as well as the density of the matrices you obtained in previous work. |
[QUOTE=swellman;462058]Reserving C205_125_117 for postprocessing. Thank you.[/QUOTE]
ETA is 19 July. |
I'll take C215_145_52 next.
|
Taking C198_149_50
|
[QUOTE=VBCurtis;461659]If 148^83+83^148 can get another ~60-70MQ scheduled to get raw rels over 700M, I can get the LA started before I leave. It's a 4-node Dell C6100, for which MadPoo sent me his drawer of server DDR2 4GB sticks; two nodes have 32gb and two have 24gb, so I can run a second LA job over summer.
[/QUOTE] Any progress on 148^83+83^148? |
[QUOTE=swellman;462740]Any progress on 148^83+83^148?[/QUOTE]
remdups left 703M unique relations; I started filtering last night. I'll check on it tonight and report on a matrix size (or that density 128 failed). |
[QUOTE=VBCurtis;462748]remdups left 703M unique relations; I started filtering last night. I'll check on it tonight and report on a matrix size (or that density 128 failed).[/QUOTE]
Good luck! Might I ask what command you used for remdups with this job? I haven't yet run remdups on my 32 Gb machine and am not sure of the optimal parameters. |
I chose 4000 for DIM, and I think the stats mentioned the peak dimension used was 2500 or 2600. So anything 3000+ should work fine; I'm new to remdups, having only used it a half-dozen times or so.
|
C206_129_95 done
1 Attachment(s)
[code]
Wed Jul 5 20:42:24 2017 p79 factor: 2353306656022930028910635861744663284129653692693621106551161170628133019327781
Wed Jul 5 20:42:24 2017 p128 factor: 35158938100505702216835191053339927789653957737747925804813852939832755829649106582970407484313288730036287017127163544287828533
[/code] 194.7 hours for 17.80M matrix at density 138 (density 140 didn't work) on 6 cores i7/5820K. Log attached and at [url]https://pastebin.com/f0isNqny[/url] |
[QUOTE=VBCurtis;462748]remdups left 703M unique relations; I started filtering last night. I'll check on it tonight and report on a matrix size (or that density 128 failed).[/QUOTE]
Cough, the matrix built at density 128. Cough, it's 50.0M. Cough, 9100 hr ETA. Trying at density 132. Do we care if this takes an actual year? |
[QUOTE=VBCurtis;462793]Cough, the matrix built at density 128.
Cough, it's 50.0M. Cough, 9100 hr ETA. Trying at density 132. Do we care if this takes an actual year?[/QUOTE] Better ask Greg to run it on the cluster. |
C208_138_73 done
1 Attachment(s)
[code]
Thu Jul 6 04:09:13 2017 p82 factor: 9736267685286991604296590214686121784068532407049693617052452318676898365463820219
Thu Jul 6 04:09:13 2017 p126 factor: 994443955602334155737530937242373089041045893579192732226709889022286276263920917709561671880531487612825000432373513246463893
[/code] About 117 hours on 7 threads E5-2650v2 for 15.06M matrix at density 130 (density 140 didn't work). Log attached and at [url]https://pastebin.com/zzGHtnuc[/url] |
[QUOTE=VBCurtis;462793]Cough, the matrix built at density 128.
Cough, it's 50.0M. Cough, 9100 hr ETA. Trying at density 132. Do we care if this takes an actual year?[/QUOTE] Any luck with higher TD? What are your feelings on a job duration of this magnitude? I'm willing to try it if you decide to pass, but I doubt my hardware is any faster than yours (the opposite I predict). :max: |
[QUOTE=swellman;462918]Any luck with higher TD? What are your feelings on a job duration of this magnitude?
I'm willing to try it if you decide to pass, but I doubt my hardware is any faster than yours (the opposite I predict). :max:[/QUOTE] I filled the disk with a retry: I had moved the matrix files for TD 128 before trying 132, and didn't notice they were 30+GB. I tried again last night at 136 with sufficient disk space, and should have info tonight. The log indicated ~4M excess cycles at 128, which is why I thought 136 might work. I think we can get creative about transferring the matrix files, and each run this for 5-6 months to get it complete. E.g. I'll run it until December, we figure out a way to transfer the files, and you complete the job? I think we should ask Greg to run the next GNFS, the one that's 1 digit bigger. |
I'm game, though I suspect we may encounter some issues in this process (e.g. matching superblock size, etc). Hoping others here can offer advice. Maybe a higher TD will have a significant positive effect on ETA.
FWIW, I'm trying to procure a refurbished HP Z820 with enough memory and at least partial processor power to run a 33-bit job (which will still take months to run). But I don't have it yet - waiting on funding and domestic management buy-in. |
Oh, didn't think of superblock/architecture details. The CPUs are core2-era, which likely explains the leisurely ETA (dual 4-core Xeons @2.13, I think). I don't have any other machines with more than 16GB, alas; I'd be more confident if I could start the LA on the same architecture you'd be finishing it on.
I picked up a Z600 (dual 6-core Xeon) from ebay this winter, am very very happy with the purchase. 600 series has 6 DDR3 slots, 3 per CPU socket; 800 series has twice that. I do wish I'd had a crystal ball to warn me DDR prices would double 3 months after purchase, so I could have loaded it better than 6x2GB sticks. So, it doesn't do NFS@home work. A Z820 with a pile of 4GB sticks is a very capable machine! |
[QUOTE=VBCurtis;462920]I tried again last night at 136 with sufficient disk space, should have info tonight. The log indicated ~4M excess cycles at 128, which is why I thought 136 might work.[/QUOTE]
TD 128 built a 50.05M matrix with weight of sparse part 5.79G. TD 136 built a 49.2M matrix with weight of sparse part 6.0G. The ETAs match within a few dozen hours (136 is less than 1% higher after 100k dimensions, but it is running now so I left it). ~9000hr. If an expert could weigh in on the chances of transferring the matrix files for a partner to complete this monster a few months from now, I'd like that reassurance. |
I'm afraid I can't be very encouraging: a little while ago I tried running the matrix preparation on one machine and moving to another, and didn't achieve success: see [url]http://mersenneforum.org/showthread.php?t=22176[/url]
('pumpkin' has defective memory, but I'm pretty sure 'butternut' doesn't) I am planning on getting an i9-7900X early in 2018, but I expect you to have found some way to finish the linear algebra by then; at the moment it's hot enough that the top two computers in my pile of ex-Facebook machines in the shed keep spontaneously turning themselves off, which is unpromising for doing MPI on that cluster. |
Taking C217_143_51. C162_11040_10042 I will leave for someone with a more bijou steamroller.
|
[QUOTE=fivemack;463077]
I am planning on getting an i9-7900X early in 2018, but I expect you to have found some way to finish the linear algebra by then; at the moment it's hot enough that the top two computers in my pile of ex-Facebook machines in the shed keep spontaneously turning themselves off, which is unpromising for doing MPI on that cluster.[/QUOTE] Have you considered the AMD ThreadRipper(16C/32T)? |
[QUOTE=pinhodecarlos;463080]Have you considered the AMD ThreadRipper(16C/32T)?[/QUOTE]
Yes, though I am more interested in the prospect of AVX-512; I expect AMD benchmarks to be available by the end of this year and they will play a part in my decision. |
[QUOTE=VBCurtis;462973]TD 128 built a 50.05M matrix with weight of sparse part 5.79G.
TD 136 built a 49.2M matrix with weight of sparse part 6.0G. The ETAs match within a few dozen hours (136 is less than 1% higher after 100k dimensions, but it is running now so I left it). ~9000hr. If an expert could weigh in on the chances of transferring the matrix files for a partner to complete this monster a few months from now, I'd like that reassurance.[/QUOTE] After discussions via PM, VBCurtis is going to abandon this job and I will attempt it on my hardware. The huge ETA, combined with the potential difficulties (or impossibility) of executing a relay LA on two different rigs, was not appealing to either of us. Hoping this factorization will finish by winter! |
13*2^800-1 factored
[code]prp81 factor: 160495860510299089900884881150565381014314892318013981084372922252235562566936671
prp109 factor: 1827332725815904515769354793594180789579187639474678166920356073397044898104863056938970852957550120922053489[/code] Q=15M to 120M produced 385M rels; I downloaded 380M of them, yielding 322M unique. TD 136 failed, but TD 128 produced an 8.8M matrix, which took 67 hrs to solve on 6 threads of a not-idle i7-5820. Log: [url]https://pastebin.com/0i2SZLUv[/url] |
C198_149_50 complete
1 Attachment(s)
[code]
Wed Jul 12 02:14:36 2017 p63 factor: 541684712662784107865620586576703470646926294644158189450179227
Wed Jul 12 02:14:36 2017 p135 factor: 656960557069151588584995162383011277660705889083339105189749606002541691335451415407892037924827117775856772433500663369818477002938683
[/code] 122.6 hours for 15.15M matrix at density 134 (142 didn't work) on 7 threads E5-2650v2. Log attached and at [url]https://pastebin.com/Xjwsw2b4[/url] |
Reserving C162_11040_10042 (14e). I won't be able to start it until July 27.
|
I'll take C216_143_53 next.
(Should start it when C215_145_52 finishes up on Monday.) |
[QUOTE=RichD;463322]I'll take C216_143_53 next.
(Should start it when C215_145_52 finishes up on Monday.)[/QUOTE] Crap, due to a power outage C215_145_52 won't finish until early Tuesday. I couldn't get to the machine for several hours. Therefore, I can't post the results or start the next job until I get back next weekend. If someone wants to start C216_143_53 earlier, feel free to do so. Else I will start it as soon as I get back. I already have it downloaded but can't run two of these jobs simultaneously. |
1 Attachment(s)
If you are using Linux etc. this script could be helpful.
Rename to waitfor.sh and run as follows: [code] waitfor.sh 12345;next_command -parameters [/code] Replace 12345 with the PID of the job that's running now. When that stops, waitfor.sh will soon end and the command after the ; will run. Chris |
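The attached script itself isn't reproduced in the archive. A minimal sketch of how such a wait-for-PID loop can work in POSIX shell (hypothetical; the real waitfor.sh may differ in its polling interval or details):

```shell
#!/bin/sh
# Hypothetical sketch of waitfor.sh; the attached script may differ.
# kill -0 sends no signal -- it only tests whether the process still exists.
waitfor() {
    _pid="$1"
    _poll="${2:-10}"    # seconds between existence checks
    while kill -0 "$_pid" 2>/dev/null; do
        sleep "$_poll"
    done
}

# e.g.: waitfor 12345; next_command -parameters
```

One caveat with any PID-polling approach: if the watched job exits and the OS reuses its PID, the loop will keep waiting on the unrelated new process, so it's best suited to long-lived jobs checked reasonably often.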
Reserving C190_179743_47, once it's done sieving later this week. Hard to tell its actual status - the [url=https://escatter11.fullerton.edu/nfs/crunching.php]estimated pending relations field[/url] seems a bit wonky - but I believe a matrix can be built.
In other news, the three big NFS jobs I currently have reserved should all finish this week. |
Reserving C194_69655517_31
C217_143_51 will be a bit late (probably Thursday) because I forgot to check it was running before going home for the weekend. |
C194_69655517_31 needs significantly more sieving (399189213 relations of which 313468110 unique produces a 25.9M matrix at density 70) so I've added another 25% to the range.
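For reference, the duplicate rate implied by those counts can be computed directly (a quick sketch using the figures quoted above):

```shell
# Duplicate rate from the raw vs. unique relation counts for C194_69655517_31.
raw=399189213
uniq=313468110
awk -v r="$raw" -v u="$uniq" \
    'BEGIN { printf "unique: %.1f%%  duplicates: %.1f%%\n", 100*u/r, 100*(r-u)/r }'
# prints: unique: 78.5%  duplicates: 21.5%
```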
|
C205_125_117
1 Attachment(s)
C205_125_117 factored
[code]
prp75 factor: 100885093929626251635775317267736996137727439630450175639304563467156046629
prp131 factor: 12701496319468862837540492499469260574619899842603552199357289643382259003486793559772776562561562290594514802351579644894083619029
[/code] |
C190_179743_47 has managed to build a matrix at TD=70. 648 hrs ETA. I'll just grind it out.
|
[QUOTE=swellman;463788]C190_179743_47 has managed to build a matrix at TD=70. 648 hrs ETA. I'll just grind it out.[/QUOTE]
No, no, no. Ask Tom to push more Qs for it. In the mean time you can take C216_143_53 since I won't be back home to start it until the weekend. Chris2be8 posted a nice utility but I didn't have time to test it before my departure. I do see other uses for it and possibly post-processing in the future. |
[QUOTE=RichD;463794]No, no, no. Ask Tom to push more Qs for it. In the mean time you can take C216_143_53 since I won't be back home to start it until the weekend.
Chris2be8 posted a nice utility but I didn't have time to test it before my departure. I do see other uses for it and possibly post-processing in the future.[/QUOTE] I get what you are saying but how much time will be saved with more Q and a tighter matrix (say TD=120)? A few days or is it a significant % saved? You keep C216_143_53 - it's only a few more days to the weekend. I'm a bit overcommitted as it is. But I'll keep my reservation for C190_179743_47, however it plays out. 391896643 raw relations with 310335082 unique produces a 26M matrix @TD=70. I've paused this job for now. |
[QUOTE=swellman;463795]I get what you are saying but how much time will be saved with more Q and a tighter matrix (say TD=120)? A few days or is it a significant % saved?
You keep C216_143_53 - it's only a few more days to the weekend. I'm a bit overcommitted as it is. But I'll keep my reservation for C190_179743_47, however it plays out. 391896643 raw relations with 310335082 unique produces a 26M matrix @TD=70. I've paused this job for now.[/QUOTE] From memory I've been running about 380-400 hours on 32-bit jobs with TD=120-130 on a Core-i5 2500K with four threads. |
C217_143_51 done
1 Attachment(s)
[code]
Thu Jul 20 01:20:05 2017  p84 factor: 315081310393501878216040828192206404963398017134567669168325803739898920745212114911
Thu Jul 20 01:20:05 2017  p134 factor: 12211655599656173996660665325606432763190132541118112927283296244519068892798147931671692296548735071391014776103600989212452200312907
[/code] About 75.8 hours on 7 threads of an E5-2650v2 for a 12.29M density-142 matrix (not enough relations for density 146). Log attached and at [url]https://pastebin.com/Fdgrz6FC[/url] |
C254_133_85
1 Attachment(s)
C254_133_85 factored
[code]
prp121 factor: 2522479326958335193262464788832784044724241938144752048623231116420930005246613749022425952915031239001448310677586692919
prp133 factor: 9661921682976101386877076108410549657876378180954074977766603179286888755622058010114462730030563197789881940277972689815469022912371
[/code] |
C215_145_52 factored
A few problems while processing this number.
Guessing about 260 hours to solve a 17.7M matrix using -t 4; TD=128 was attempted but a retry was performed. Not sure what the final figures are since there was at least one power outage. Anyway, the results are: [CODE]
p69 factor: 811884529212156121214196053224656649627421583977579650693247575913281
p72 factor: 410052737060507499868085991922786692682851786985574701364922782002592497
p74 factor: 85660759058095833197360874851652786516606665723271609061872559260331035143
[/CODE] [url]https://pastebin.com/yU2c5BXU[/url] |
C191 from AS 3408 step 1668
1 Attachment(s)
Finally factored. Reported to fdb, now a C186 blocker. Log attached.
[code]
p86 factor: 47128585108733309220305950275981748827064241088820215576424964728151368836716731278797
p105 factor: 625117287849127517677481176326464356064314285017067719232733781151482927049946251298164613294418106954579
[/code] |