C184_4496 built at TD=128. ETA is ~227 hours.
|
What are the weird letters/numbers in the comments of the lasieved queue that almost look like some sort of checksum?
It starts at entry 577: Youcef Lemsafer: p49 * p62 * p80 (ECM to 2t50 by yoyo@home and Lionel Debroux) [B][nzaMusZs][/B] What is the [nzaMusZs] part supposed to mean? |
I think this funny game should be dismissed. Nfs@Home website is not mersenneforum. Please stop messing around.
|
[QUOTE=pinhodecarlos;417350]I think this funny game should be dismissed. Nfs@Home website is not mersenneforum. Please stop messing around.[/QUOTE]
I don't believe it is a game; it looks more like a checksum or hash. |
It is a Pastebin link for the log files. (Not the whole link. Just the unique part.)
:mike: |
[QUOTE=Xyzzy;417414]It is a Pastebin link for the log files. (Not the whole link. Just the unique part.)
:mike:[/QUOTE] Would it be difficult to make that a link to the log at pastebin instead of just a text reference? This would be more useful and avoid confusion. :smile: |
C205_146_33 Factors
[code]prp98 factor: 42385330166452601710773828105723858609811307869027139366629509703607123662887877881888401677407269
prp107 factor: 78183476826916122279339496789298279582236292739150574668142346694543229400545784067293767633953021799152953 [/code] |
[QUOTE=Mini-Geek;417427]Would it be difficult to make that a link to the log at pastebin instead of just a text reference? This would be more useful and avoid confusion. :smile:[/QUOTE]It wouldn't be a clickable link. (Would that be okay? The lines are already pretty long.)
Perhaps the underlying page can be modified to have a link field? |
[QUOTE=Xyzzy;417444]It wouldn't be a clickable link. (Would that be okay? The lines are already pretty long.)
Perhaps the underlying page can be modified to have a link field?[/QUOTE] Hm..yeah, might be better not to include a non-clickable link, between the length and non-clickyness. Ideally, the page could be modified to have links there (e.g. by a separate field in whatever DB the data comes from). As an alternate solution/workaround, a script could easily turn it into a link on the client side. If you have GreaseMonkey, [URL="https://gist.github.com/Mini-Geek/f43daf88c4c347554736/raw/d90555d1c9328e7939a4acfa8ce3d8bcd4a7223b/NFSlinks.user.js"]here's a script[/URL] I wrote that turns the code into a "(log)" link. A similar script could be worked into the page (preferably without jQuery like my script has...it's a bit overkill). It's a little messier, and won't work if the client has JS disabled, but it's an idea. At a minimum, the "Explanations" section could tell you what the codes mean. |
W_793 Factored
[code]
prp89 factor: 52578006040192355623775994322704245125311412836412473633875942682459163325370741398057851 prp119 factor: 83688093368640500497160950082616120848195687278544409391289351088529301697881400354668318232390167964276103853857931851 [/code] |
EMG-C188 factored
1 Attachment(s)
[code]
Sat Nov 28 13:19:23 2015 p91 factor: 1455229648108768594552694966205142453019168989838313852033836936726828258873327877743335007 Sat Nov 28 13:19:23 2015 p98 factor: 60531294671960669735077411626867118292354916934357577613958209636477806174025280318238132795431621 [/code] About 212 hours on 7 threads Xeon E5-2650v2 for a 21.1M matrix (the machine had to be rebooted a few times) Log attached |
Taking
[B]C154_P182_plus_1 [/B][B]C155_P209_plus_1 C153_P233_plus_1[/B] for post-processing. They should take only about a day each. |
You have decent horsepower, so I'd say much less than a day for each of them, though one day for all three might be a stretch.
|
W_790
1 Attachment(s)
[CODE]prp88 factor: 7672236958518363816567697109832643079838636749631210749387737419588642636816013574367653
prp100 factor: 6562282240936037358815422484225511323795135516019451448000923739689561470840832334996806359588917617[/CODE] |
C184_HP2_4496
The C184 blocking HP2(4496) splits as:
[CODE]prp83 factor: 18629234615651511444939975064252892061546608854100819750912782705794351301276248439 prp101 factor: 90074244593568724732372840988999455713334654010427512152214073432054138168692117445761801991568405847 elapsed time 110:46:38[/CODE] Elapsed time is misleading as there was a power outage midway through. Actual total time is somewhere on the order of 120-140 hours. Matrix was 15331633 x 15331859 with TD=128. |
Running 4261-67; ETA is 21 hours, but that's the weekend so I won't see the answer until Monday.
Also taking 2269-67 and 2789-67 which should fit in over the weekend. |
What are best parameters to run MPI version of msieve on single computer? Have Dual Xeon E5-2620, so 6cores/12threads * 2. I compiled msieve v1.52 with OpenMPI 1.8.1 and newest GMP 6.1.0.
I tried the MPI version on a C165 GNFS job with many different options (-bind-to-core/-bind-to-socket, -bycore/-bysocket, -cpu-set, etc., running with and without the taskset command) and the best I could achieve was 60 hours (taskset -c 0-11 mpirun -np 12 -bind-to-core msieve -t 12 -nc2 2,6). Running without MPI I got much better results:
taskset -c 0-11 msieve -t 12 -> 36 hours (running on only one CPU)
taskset -c 0-5,12-17 msieve -t 12 -> 40 hours (running on both CPUs)
msieve -t 12 -> 43 hours (without the taskset command)
taskset -c 0-5 msieve -t 6 -> 63 hours (only 6 threads)
I'm disappointed with these results; what am I doing wrong? |
Try, perhaps with a -bysocket if it helps,
mpirun -np 2 msieve -nc2 1,2 -t 12 |
On a 48-core (4 sockets x 2 chips per socket x 6 cores per chip) Opteron machine I found that it was very helpful to have a 'numactl -l' in the command line, as well as the taskset, to ensure that the memory was allocated on the node on which the process was running. I got mpirun to run a script which contained a taskset command, rather than trying to taskset the mpirun itself - see next post.
I am slightly surprised that you're finding -t12 faster than -t6 on a hyperthreaded system; I should redo that measurement with the next linear algebra job I run. |
For the two-layer approach I did something like
[code] mpirun -n 8 run.2,4.6.sh [/code] where run.2,4.6.sh was [code] msieve_real='/home/nfsworld/msieve-svn-again-mpi/trunk/msieve -v' CPUL=$[6*$OMPI_COMM_WORLD_RANK] CPUR=$[6*$OMPI_COMM_WORLD_RANK+5] taskset -c $CPUL-$CPUR numactl --cpunodebind=$OMPI_COMM_WORLD_RANK -l $msieve_real -t 6 -nc2 2,4 [/code] |
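To make the arithmetic in run.2,4.6.sh concrete, here is the rank-to-core mapping it computes for the eight ranks started by [c]mpirun -n 8[/c] (a sketch only; the loop variable stands in for the OMPI_COMM_WORLD_RANK environment variable that Open MPI exports to each process):

```shell
# Each of the 8 MPI ranks gets a contiguous block of 6 cores:
# rank r -> cores 6r .. 6r+5.
for OMPI_COMM_WORLD_RANK in 0 1 2 3 4 5 6 7; do
    CPUL=$((6 * OMPI_COMM_WORLD_RANK))      # lowest core for this rank
    CPUR=$((6 * OMPI_COMM_WORLD_RANK + 5))  # highest core for this rank
    echo "rank $OMPI_COMM_WORLD_RANK -> taskset -c $CPUL-$CPUR"
done
```

So rank 0 is pinned to cores 0-5, rank 1 to cores 6-11, and so on up to rank 7 on cores 42-47, matching the 48-core topology described above. |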
[QUOTE=unconnected;418280]taskset -c 0-11 mpirun -np 12 -bind-to-core msieve -t 12 -nc2 2,6[/quote]
I'm surprised that worked at all in as little as 60 hours; it causes mpirun to start up 12 copies of msieve [I]each of which[/I] tries to use twelve threads, so I'd have thought you'd see the machine load average going into the hundreds. |
results C153 C154 C155
2 Attachment(s)
[B]C153_P233_plus_1[/B]
[code] prp57 factor: 976091661440498273311588215450771879582220118969346171503 prp96 factor: 430335904840578306669039776242416581629866475129170214872930601654724618834763764783224209970307[/code] matrix is 4140851 x 4141076 (1976.1 MB) ------------------------------------------------------------------------ [B]C154_P182_plus_1[/B] [code]prp71 factor: 27277576813606975571844943309761673123428776562927433726921949754679613 prp84 factor: 171066199323584838258211112753849127051765672129864072245708787815508766170527186951[/code] matrix is 4358959 x 4359184 (2091.8 MB) ------------------------------------------------------------------------ |
C155
1 Attachment(s)
[B]C155_P209_plus_1[/B]
[code]prp74 factor: 10346196175428152102773520718167922375362025229971136961917614228903776537 prp82 factor: 4105695312892115636553366234774522477274854964663968305029273169198918335942867279 [/code]matrix is 4502743 x 4502968 (2160.1 MB) ------------------------------------------------------------------------ All filtered with TD=130, LA took 10-12h each, logs attached (couldn't add 3 attachments to one post). |
For the 14e numbers, I've added a pastebin field for the unique part of a pastebin url containing the log. If that is populated, on the status page the comment becomes a link to the corresponding pastebin. :smile:
Edit: Now 15e has been given the same treatment. |
:ttu:
|
2 Attachment(s)
My msieve logs for pastebin url.
|
Much better and cleaner now.
Thanks Greg! |
4261-67 done
1 Attachment(s)
[code]
Sat Dec 5 09:09:56 2015 p68 factor: 99713931517486444010746977398434147387312495439028057685457930885581 Sat Dec 5 09:09:56 2015 p115 factor: 6337366248063930626919828317773198377075808129453921717685844042320912353140083589239167509613707066272579659320439 [/code] 34.4 hours on 7 threads E5-2650v2 for 9.04M matrix at density 120. Log attached. |
2269-67 done
1 Attachment(s)
[code]
Sat Dec 5 17:44:54 2015 p91 factor: 2132511512443414333678564691107777973475861603076069009971937493950396472549072567490874627 Sat Dec 5 17:44:54 2015 p129 factor: 532873974251185874870260239618395036338610187500129178927301700770853208890035150980092485967757603488674407284211962191930161857 [/code] 8.3 hours for 4.63M density-120 matrix on 7 threads E5-2650v2. Log attached. |
The best TD for L1397 we can get is 108. Doesn't that mean the job is going to run significantly longer?
[code]Sat Dec 5 11:35:46 2015 setting target matrix density to 140.0 Sat Dec 5 16:00:06 2015 setting target matrix density to 130.0 Sat Dec 5 20:01:24 2015 setting target matrix density to 125.0 Sun Dec 6 09:13:49 2015 setting target matrix density to 120.0 Sun Dec 6 13:49:42 2015 setting target matrix density to 116.0 Sun Dec 6 20:20:04 2015 setting target matrix density to 112.0 Mon Dec 7 06:17:51 2015 setting target matrix density to 108.0[/code] (We are still running 115!+1, which ended up with a TD of 116.) :mike: |
[QUOTE=Xyzzy;418506]The best TD for L1397 we can get is 108. Doesn't that mean the job is going to run significantly longer?[/quote]
Longer, but not significantly longer (in particular, definitely not enough longer to make up for the humans-editing-files cost involved in re-opening the sieving, even if it doesn't end up right at the back of the queue again). Have you got an ETA for 115!+1? |
[QUOTE=fivemack;418511]Have you got an ETA for 115!+1 ?[/QUOTE]It should be done on or about December 14th.
We have found that our system's memory bandwidth limits us to using only three threads for optimal performance. (Hopefully we are not holding anything up with our glacial progress!) :mike: |
2789^67-1 done
1 Attachment(s)
[code]
Tue Dec 8 16:41:27 2015 p65 factor: 52319708885249431043705448772896594826046601248046896873868513419 Tue Dec 8 16:41:27 2015 p161 factor: 17837750601953663787657057014561378748766403974759587233367011267996725609141518014034601025231393060936817374861578853716489164245294521433680543870685215627921 [/code] 29 hours for an 8.43M matrix on 7 cores Xeon E5-2650v2, not quite 100% otherwise idle. Log attached. Reserving 7321-61 |
Reserving [B]C155_P172_plus_1[/B]
|
Small delay on 3037-67
I have had to put it back in for more sieving; 97.3M relations with 79.5M unique happened to give 85.2M unique ideals, which isn't enough for filtering to work. I'll probably be able to start it on Friday and it'll be done over the weekend. |
Also reserving C161_P235_plus_1 and putting it back in for more sieving. |
7321-61 done
1 Attachment(s)
[code]
Thu Dec 10 07:24:58 2015 p59 factor: 45868066129589875107571532013872682868092140317207831067923 Thu Dec 10 07:24:58 2015 p172 factor: 2675737295247574819499131224359306794078395666577248150042811319894796482703258693994436203210724157088842039717303226325110885323371495827882707572573091946968196202178187 [/code] 33.6 hours for 9.04M weight-100 matrix on 7 cores Xeon E5-2650v2. Log attached. |
Is Dmitry Domanov aware that the numbers he proposed are ready for post-processing?
I'd like to reserve [B]1481_73_minus1[/B] |
[QUOTE=VictordeHolland;418847]Is Dmitry Domanov aware that the numbers he proposed are ready for post-processing?
[/QUOTE] Yes. :smile: C165: 8.6M matrix built with TD=130 took 74 hours on Xeon E5-2620 with 12 threads [CODE]prp60 factor: 608582902145634315338470221192474523030619174936677567883719 prp106 factor: 1054550296081921268138102201181309685620883441135859389474873173216704471613847997161565988131751596396077 [/CODE]C171: 12.6M matrix built with TD=130 took 122 hours on Xeon E5-2620 with 12 threads [CODE]prp70 factor: 1941340861375784809017942439615372734407453715478372202535213287961083 prp102 factor: 510542841726947172131836848302103429631837075176126688027581259524967935645243719635933823119275357569 [/CODE]Note: LA for both numbers was started simultaneously, which decreased performance a lot. |
C155_P172_plus_1 result
1 Attachment(s)
[B]C155_P172_plus_1[/B]
[code]prp55 factor: 1262562858408992800428867338378430551967233554382410981 prp101 factor: 21697824085743371372469020640657301318547323841169042223962885256123964861774717098648145373746642567[/code]4.4M matrix with TD=130, log attached. |
Two done over the weekend
2 Attachment(s)
3037^67-1:
[code] Sat Dec 12 00:50:33 2015 p101 factor: 55661491609439748785222827839284809354405363711427436747346052097372392050714246101365976046351774883 Sat Dec 12 00:50:33 2015 p127 factor: 4636520357535197815957448745652850446578572287137100170697131439178703342323931795119740919120662834629979389440205090843878881 [/code] 32.4 hours for 8.87M density-110 matrix on E5-2650v2 -t7 C161_P235 [code] Sun Dec 13 07:51:50 2015 p61 factor: 2057615808601105867477509319540912079794595341210921188591047 Sun Dec 13 07:51:50 2015 p100 factor: 4907089133128414979770030148182501984276112496043076934790578417094946450078090036204367425268439879 [/code] 20.4 hours for 7.98M density-70 matrix on E5-2650v2 -t7 Two logs attached Running 4001-67 |
ETA 1481_73_minus1
Filtering of 1481_73_minus1 took longer than expected. I've tried filtering with different target densities. TD=130, 120, and 110 failed; TD=70 built a 7.9M matrix, and I've now settled on TD=100, which produced a 7.0M matrix.
ETA: [code] linear algebra completed 372326 of 7047945 dimension (5.3%, ETA 29h 5m)[/code] |
4993_61_minus1
1 Attachment(s)
[CODE]p105 factor: 164336231460105942388742398481370721038095523614225538660369568756744879501486782309248904455000737340233
p113 factor: 43194087442327000502773915547708075208172135399211093129607226448815337306897934732157183551450460726452559939519[/CODE] |
115!+1
1 Attachment(s)
[CODE]p86 factor: 77198710203968379849983982959790232761720189288516366113471961990287650166210456139137
p103 factor: 3789044772593431232515180869158859091568056944142111351549887600155013573262382213821164607845611539073[/CODE] |
1481_73_minus1 factors
1 Attachment(s)
[B]1481_73_minus1[/B]
[code]prp61 factor: 4352062441332548294398184687143563450494955252275036880750451 prp166 factor: 1495184611138708583472481598147317824803516034184600169056727793936433151790432063847157688300777822499373832335477142920694106271411995968037295133999334398545830391 [/code] Log attached. |
4001^67-1 done
1 Attachment(s)
[code]
Wed Dec 16 13:32:08 2015 p78 factor: 131111057430132915609670164300408311079195795306005239682289352312979588496053 Wed Dec 16 13:32:08 2015 p159 factor: 156978925116950077405370622322575730796675460547553584879273933674322748102434340221418332380466358460037630016402016427736459570445417823851677173745535933331 [/code] 45.5 hours on somewhat-loaded E5-2650v2 for 8.8M density-120 matrix |
Taking 3853_67_minus1
|
C222_117_100b
I can take care of [B]C222_117_100b[/B].
|
I'll take C164_3408_1595.
|
Finally got matrix built for 142^141+141^142, now in LA. Should be factored on or about Jan 20.
|
3853^67-1 done
1 Attachment(s)
[code]
Sat Dec 19 01:50:20 2015 p74 factor: 16694496988384378883435130395075555364495175454017235109090124394509234879 Sat Dec 19 01:50:20 2015 p112 factor: 2136180266804212577144649364923737757939274190371082165985972947594685559382490172205625430878564448411797220189 [/code] 31 hours for 8.38M density-120 matrix on E5/2650v2 -t7. Log attached. |
Taking W_749.
|
GW_3_474 done
1 Attachment(s)
[code]
Sun Dec 27 10:17:35 2015 prp103 factor: 3300313351291788221821570209342688546968862339690791140914246729746566203285115677561793606793245332877 Sun Dec 27 10:17:35 2015 prp120 factor: 128619934491195735229945375602171651694124194992056327270477043193010775326284863910859454357881657374000661432455833431 [/code] 9.9 hours for 5.1M matrix on i7-4930K -t6; wildly over-sieved for a number this small. Log attached. Target_density=130 failed to make a matrix ('Sat Dec 26 17:45:04 2015 matrix needs more columns than rows; try adding 2-3% more relations') but td=120 worked. I'll be starting F1297 when the i7-5820K machine arrives on Tuesday. |
12_226_plus_7_226
1 Attachment(s)
[CODE]p90 factor: 109627660849176738661801173345265464085101112781604570770691274565853908369887661133139273
p145 factor: 4036226653073711192985805697940397008439710128238021151300534282077874197139137188623805282752029534804565098862485012458739181953692546332984213[/CODE] |
C164_3408_1595 Factored
[url=http://www.factordb.com/index.php?id=1100000000805118485]p76*p89[/url]
|
1 Attachment(s)
W_749 factored: 5.5M matrix on Dual Xeon E5-2620, 12.5 hours with the MPI version of msieve. Thanks frmky and fivemack for their suggestions. Fastest non-MPI run (taskset -c 0-11 msieve -t 12) was slightly over 14 hours. Log attached.
[CODE]prp102 factor: 670910310736076688449761796815133467227494362392130282166565669537058628904634713015865313661187472759 prp108 factor: 201731657233307602088738082846062984847856959150903458646930776241969933873976830711927888142905602484980759[/CODE] |
I'll take GC_3_475 next.
|
C_748
1 Attachment(s)
[CODE]p101 factor: 18143578957112903024219208128943064170091130559885591410019364935058555931716545645750762965103877397
p111 factor: 844005220452601906434351028899191443018391831514749834501830373919894736659932478555984863889143657410223972729[/CODE] |
[QUOTE=unconnected;420418]W_749 factored: 5.5M matrix on Dual Xeon E5-2620, 12.5 hours with the MPI version of msieve. Thanks frmky and fivemack for their suggestions. Fastest non-MPI run (taskset -c 0-11 msieve -t 12) was slightly over 14 hours. Log attached.
[CODE]prp102 factor: 670910310736076688449761796815133467227494362392130282166565669537058628904634713015865313661187472759 prp108 factor: 201731657233307602088738082846062984847856959150903458646930776241969933873976830711927888142905602484980759[/CODE][/QUOTE] [QUOTE=Xyzzy;420425][CODE]p101 factor: 18143578957112903024219208128943064170091130559885591410019364935058555931716545645750762965103877397 p111 factor: 844005220452601906434351028899191443018391831514749834501830373919894736659932478555984863889143657410223972729[/CODE][/QUOTE]Thanks guys, good work! unconnected: could you let me have your real name please, so that I can give you the correct credit? Paul |
I'll take GW_3_476 next.
|
GC_3_475 factored
1 Attachment(s)
About 25.5 hours after a slow start waiting for a previous job to finish up.
6.3M matrix on a Core-i5, -t 4 using target_density=120 (TD=124 failed). [CODE]p62 factor: 31700911063543846311835857974996310875211036872405494096345661 p62 factor: 51448853387154689366548847554909900838196345380386288956173021 p66 factor: 407764078437614725139759836851855958207251556603307029664461313309[/CODE] |
Paul: unconnected is Dmitry Domanov.
See also [url]http://escatter11.fullerton.edu/nfs/crunching.php[/url] :smile: |
[QUOTE=debrouxl;420671]Paul: unconnected is Dmitry Domanov.
See also [url]http://escatter11.fullerton.edu/nfs/crunching.php[/url] :smile:[/QUOTE] Thanks. Dmitry himself PMed me. Paul |
I'll take GW_4_375 next.
|
GC_4_376 now running
|
GW_3_476 factored
1 Attachment(s)
36 hours on a slightly loaded Core-i5 (3570K) to solve a 6.5M matrix, -t 4 with target_density=120.
[CODE]prp56 factor: 14635096559014854486553588561174243514764686911567877449 prp118 factor: 6164493023046673332418532316660557611905060788835161610151276347203162809399665371849604331478161694487871020569852753[/CODE] |
Since my role in factoring 4051^71-1 has so far been limited to poking and prodding others to commit their own resources, the least I can do is reserve it for post processing.
Edit: Oh wait, will it fit in 16 GiB...? I don't do large factorizations anywhere near enough to recall the answer off the top of my head. |
[QUOTE=Dubslow;420760]Since my role in factoring 4051^71-1 has so far been limited to poking and prodding others to commit their own resources, the least I can do is reserve it for post processing.
Edit: Oh wait, will it fit in 16 GiB...? I don't do large factorizations anywhere near enough to recall the answer off the top of my head[/QUOTE] Yes, it should fit easily in 16 GB. |
[QUOTE=swellman;420773]Yes, it should fit easily in 16 GB.[/QUOTE]
Neat. In that case, assuming that no one has reserved C175_4788_5241, then I would like to post process that one as well. |
GC_4_376 done
1 Attachment(s)
Three-way split, which is a little unusual
[code] Fri Jan 1 03:38:37 2016 prp58 factor: 1697918852793197192217323908495025258053565572486150252069 Fri Jan 1 03:38:37 2016 prp58 factor: 2082414812562347487534209948536765479560302299495190215491 Fri Jan 1 03:38:37 2016 prp73 factor: 6312408474500341106936815071249860914101269298101832313661343150269184973 [/code] Sixteen or so hours for 5.3M matrix on i7/4790K, but don't believe this time because I forgot to stop mprime for the first few hours. [PASTEBIN]xz65v9J6[/PASTEBIN] |
[QUOTE=fivemack;420826]Three-way split, which is a little unusual
3,475+[/quote] Indeed, though Rich Dickerson also had a 3-way split yesterday for GC_3_475 into p62 * p62 * p66. Curious that adjacent entries in the 3+ table should behave like that. Paul |
GW_4_375 factored
1 Attachment(s)
22 hours to solve a 6.1M matrix on a Core-i5 (2500), -t 4 using target_density=112 (TD=120 failed).
[CODE]p112 factor: 4056928493609013958804639147971978303117812707562018799041101931093966442274853083123509995580542478762100510867 p116 factor: 49766600723887431286229655589628760624601439750538384432683252305743967895154190482041153258663513432598853842824527[/CODE] |
Taking GC_6_290 and GC_6_291.
|
GC_6_291 done
1 Attachment(s)
[code]
Sun Jan 3 00:13:29 2016 prp67 factor: 7128029182287052712024884582525109111910095569550925881449077101221 Sun Jan 3 00:13:29 2016 prp116 factor: 10636651627022111863672091779433066677624021733172213386201935909099491351431923386843729789723539897838698109846617 [/code] 22.5 hours for 6.6M density-110 matrix on 4 threads i7/4770K Log attached. |
Taking GW_5_323, ETA Thursday evening
|
GW_4_377
1 Attachment(s)
[CODE]p59 factor: 83183733331118795763978354021097038819918862223784769031299
p120 factor: 614208831276238939388701945744049795019787238427208608225289996654837009564461039743982141601834688980252140038410069219[/CODE] |
GC_6_290 done
[code]
Sun Jan 3 14:34:19 2016 p70 factor: 3838583084750444102740442464057059432973534582863054746119950413023069 Sun Jan 3 14:34:19 2016 p129 factor: 283816119704891256111630664948882422745194213705935241539967857502526755342090471257205774173006297665485778914972965504238027583 [/code] 47.3 hours on four cores not-completely-idle i7/750 (2.66GHz Nehalem) for a 6.93M density-120 matrix [pastebin]LXCeDv8K[/pastebin] |
I'll take [B]1373_79_minus1[/B]
|
I'll take C260_131_97; it's not very thoroughly sieved and I can't start it until Thursday so I've added a little more sieving
|
I'd like to reserve C211_128_95. These 32-bit jobs typically take me a month to post process on my hardware once sieving is complete, so results expected in late Feb.
|
C222_117_100b
1 Attachment(s)
[QUOTE=YuL;419526]I can take care of [B]C222_117_100b[/B].[/QUOTE]
There it is [CODE]Tue Jan 05 15:58:12 2016 p76 factor: 4120863248898883944673168794051506641951216528737442767203785019207683469561 Tue Jan 05 15:58:12 2016 p146 factor: 28954987302616180352654964701741155714626208359595775100002629763551892468230003518449413144276128353658237859816744334177861731730414003527776469 [/CODE]Linear algebra took 33.6 hours on Dual Xeon E5-2620, using 2x 6 threads. Failed to build matrix with td=132, 8.4M matrix built at td=120. Log is [URL="http://pastebin.com/dWYE1TqG"]here[/URL] or below. |
Reserving 1847_71_minus1.
|
Smallish postprocessing job
What are the chances of a post-processing job coming up that takes around 9 hours to complete on an i7 8-core CPU and fits into 16 GB of DDR4 RAM?
|
[QUOTE=Speedy51;421358]What are the chances of a post-processing job coming up that takes around 9 hours to complete on an i7 8-core CPU and fits into 16 GB of DDR4 RAM?[/QUOTE]
Nine hours is pretty short (though remember that post-processing saves checkpoints, and you can stop with ^C and restart with -ncr); the newly-queued SNFS(22x) jobs from XYYXF may be small enough; C220_120_79 is the one I'd go for. |
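The stop/resume cycle mentioned above can be sketched as a small wrapper. This is an assumption-laden sketch: it assumes msieve is on the PATH, the job's .fb/.dat files are in the working directory, and that [c]-ncr[/c] is the linear-algebra restart flag (which picks up the most recent checkpoint). The command is echoed rather than executed here:

```shell
# Start-or-resume wrapper for the msieve linear algebra stage (sketch).
MSIEVE=msieve   # assumed to be on the PATH
THREADS=8
if ls ./*.chk >/dev/null 2>&1; then
    MODE=-ncr   # checkpoint present: resume linear algebra where it stopped
else
    MODE=-nc2   # no checkpoint yet: start linear algebra from scratch
fi
echo "$MSIEVE -v -t $THREADS $MODE"
```

Since checkpoints survive ^C, a long LA run can be spread across several sessions this way without losing work. |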
GW_6_292
1 Attachment(s)
[CODE]p86 factor: 69741365574533752254594164738001803546481875701849576499767112517055557573515145035963
p126 factor: 240821496562258739804520358075572518952446628920635008271324609310827388004252224223625023809844725445327139092195009435513161[/CODE] |
GW_5_323
[code]
Thu Jan 7 16:34:33 2016 p63 factor: 678422165302644424098458656375303351235389822136143858739804111 Thu Jan 7 16:34:33 2016 p128 factor: 16443978886221186185526944247300408975348089429041274717695860629348226934786113660048017778627047529093496932303519277436211533 [/code] 95 hours for 8.6M matrix on four cores i7/750 (iMac) [pastebin]n0GVPeuM[/pastebin] |
Fib(1297) done
1 Attachment(s)
[code]
prp119 factor: 15011730273663201963722255337625432656400922499727257498162732053728877806786627805795673539959336308188095142217564781 prp127 factor: 9504352296967061894020903577315328743192719278286913745757826087407537145571335397076207246943633850031797916072424441972529521 [/code] 233.5 hours for 20.6M matrix on six cores i7/5820K Log attached |
[QUOTE=fivemack;421621][code]
prp119 factor: 15011730273663201963722255337625432656400922499727257498162732053728877806786627805795673539959336308188095142217564781 prp127 factor: 9504352296967061894020903577315328743192719278286913745757826087407537145571335397076207246943633850031797916072424441972529521 [/code] 233.5 hours for 20.6M matrix on six cores i7/5820K Log attached[/QUOTE] Nice, a 100+ digit split, thanks! Did you mail Marin the factors? |
Taking 2027-71
Taking 2027^71-1
(ETA Monday morning) |
2027^71-1 done
[code]
Mon Jan 11 04:15:01 2016 p76 factor: 1204253002431366149859482948760832274442577274829552962838426912911894803237 Mon Jan 11 04:15:01 2016 p153 factor: 105109785025566811458601629348458511230953847327876108743764162477978501075269414541496577411099192147231131474007172573894334603065809329180245592190233 [/code] 29.6 hours for 8.5M density-120 matrix on 6 cores i7/5820K [pastebin]YZM6Xdzs[/pastebin] |
C260_131_97 running
ETA 400 hours (so around the end of January) for 25.8M matrix on 6 threads i7/5820K
I'm a little disappointed at the speed of the 5820K - I was expecting linear algebra to be RAM-limited and four-channel DDR4/2400 to be significantly faster than four-channel DDR3. Going to 12 threads caused about a 15% slow-down compared to using six. I'm running about 15% slower than an E5-2650v2, which has eight cores and DDR3. |
What happens if you shutdown HT and run on 6 cores?
|
Not much. On my 5820k with stock-speed memory, running 6 tasks in linux with HT disabled and 6 tasks with HT enabled produced results within 2% of each other (tested with 6x LLR, and also 6x lasieve). I assume the penalty is due to the linux scheduler sending two tasks to one physical core on occasion (I noticed a worse speed hit running 8 or 9 tasks, where a physical core would be idle sometimes while 3-4 cores had two tasks each).
|
Reserving C161_P170_plus_1
|
[QUOTE=VBCurtis;422127]Not much. On my 5820k with stock-speed memory, running 6 tasks in linux with HT disabled and 6 tasks with HT enabled produced results within 2% of each other (tested with 6x LLR, and also 6x lasieve). I assume the penalty is due to the linux scheduler sending two tasks to one physical core on occasion (I noticed a worse speed hit running 8 or 9 tasks, where a physical core would be idle sometimes while 3-4 cores had two tasks each).[/QUOTE]
I use 'taskset' to prevent the scheduler from doing such stupid things - it does seem able to take a task with six threads and assigned six cores and run one thread on each core reliably. |
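If you want to double-check what a given [c]taskset -c[/c] range means, the affinity-mask arithmetic is easy to reproduce. A sketch follows; the assumption that cores 0-5 are the physical cores and 6-11 their HT siblings holds on many Linux boxes, but should be verified against /sys/devices/system/cpu/cpu*/topology/thread_siblings_list on your own machine:

```shell
# Compute the bitmask that 'taskset -p <pid>' would report for a process
# pinned with 'taskset -c 0-5': one bit set per allowed core.
lo=0
hi=5
mask=0
c=$lo
while [ "$c" -le "$hi" ]; do
    mask=$((mask | (1 << c)))  # set the bit for core c
    c=$((c + 1))
done
printf 'taskset -c %d-%d -> affinity mask 0x%x\n' "$lo" "$hi" "$mask"
```

Seeing the expected mask (0x3f for cores 0-5) in [c]taskset -p[/c] output is a quick way to confirm the scheduler really is confined to the physical cores. |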
It looks like we are about to run out of work to sieve.
[url]http://escatter11.fullerton.edu/nfs/crunching.php[/url] :mike: |
[QUOTE=Xyzzy;422231]It looks like we are about to run out of work to sieve.
[url]http://escatter11.fullerton.edu/nfs/crunching.php[/url] :mike:[/QUOTE]Curiously enough, I was just about to post on that very topic. Based on what I currently believe about the 14e queue, the GCW project has just about run out of candidates. My beliefs may be faulty, hence the decision to post. Right now, there are two <C140 runts which I'll complete in the next few days. These are the sub-S250 remainders:
[c]226.37  7,265-  C168
227.22  7,266-  C173
227.27  8,249+  C178
227.27  8,249-  C173
227.35  6,289-  C160
227.44  2,746-  C219
227.58  5,322+  C207
227.74  4,374-  C215
[/c]which, I believe, are rather too easy for NFS@Home and which I've been keeping back for individuals. Even if I do them all myself, it won't take me more than a couple of months. I further believe that S250 is the upper limit for the 14e queue. As for GNFS, there are 12 <= C165 remaining. Excluding the runts, two more which are reserved and 6,289- above, there are seven. Polynomial searching will take a couple of days each with my resources. What are the true limits on the 14e queue? If the upper limit can be stretched there are six C166 and 28 <S252, of which 12 are < S251. Sam Wagstaff is already running ECM on the S250-260 range and removing them by SNFS starting at the high end. If the smaller ones can be added to NFS@Home I'll need to liaise with him carefully. Paul |
Reserving C221_118_81
|