I get ~5 MB/s to my office computer on campus, so I guess the campus connection to the net is slow.
|
[QUOTE=frmky;352582]I get ~5 MB/s to my office computer on campus, so I guess the campus connection to the net is slow.[/QUOTE]
I have successfully finished the download and have started it. It must have been a slow connection to the outside world. |
I am averaging about 1 Mb/sec, though the server has completely stopped responding twice now during download. Maybe the third time will be the charm!:smile:
|
L1833 done
[code]
Sun Sep 8 00:14:40 2013 matrix is 10143931 x 10144110 (4474.6 MB) with weight 1291155840 (127.28/col) Tue Sep 10 14:21:05 2013 BLanczosTime: 227377 Tue Sep 10 16:30:30 2013 prp85 factor: 2556798347133136213660969719403046563880357137187399651458904401323735661288944824779 Tue Sep 10 16:30:30 2013 prp97 factor: 3636825604724255956128826676124884119368889189359797770321022688653360159664954529109251009170631 [/code] |
I'll take GW_6_280 when it's ready.
|
[QUOTE=Mathew;352075]May I reserve C203_125_51 & GW_6_284?[/QUOTE]
Complete C203_125_51: [CODE]prp98 factor: 19061175536434650225653270349663134322666526853870058594216673150239959232256555531972787878651417 prp106 factor: 3912807565287653415742638439008489969063610625902434283982267746582923104015918542265189385088717290665459[/CODE]GW_6_284: [CODE]prp70 factor: 1142657490372859913587140386999701900190431734756324704055895779159803 prp102 factor: 152268709242081702218544730976940377195047081403868606139842642649946281957465689892434463675903075699[/CODE]GC_6_284: [CODE]prp77 factor: 47278307325604377341480989538559835590944163017685880279425845092649813407567 prp140 factor: 16931097609987500836940459539348537619909071411038590441059084361274471904727660618388833501270493668916814240832582007180031091835871983451[/CODE] |
[QUOTE=swellman;352447]I'll take GC_8_241 if it's still available.[/QUOTE]
[code] prp88 factor: 1199081646240458794101547578644285629173566747017284011213426327665268242591011536711021 prp96 factor: 156821390478312776843429065432462681201808553628085123152530607125757774095856489341182817192329 [/code] |
GW_8_241 is complete
[code]prp64 factor: 7094806455440387069683464673147494537674907344832765383479368439 prp118 factor: 2777263686204538294294875129904244933201015355332851788018114133594147121348777091904148703288796854858589920198976813[/code] |
[QUOTE=swellman;352674]I'll take GW_6_280 when it's ready.[/QUOTE]
I'm now running LA on this. ETA is 36 hours from the time of this posting. I'll take GC_4_363 once it is finished sieving. |
I seem to be making a habit of this ...
Taking L1839 (maybe something at Fullerton needs poking, I had an initial estimate of 18 days download time for the .dat.gz)
|
There are a number of LA jobs ready to start if people are interested.
|
[QUOTE=swellman;352955]
I'll take GC_4_363 once it is finished sieving.[/QUOTE] Downloading this file within the hour. Feel free to assign me another as well - I don't have a preference as they are all the same size. Thanks. |
GW_6_280
[code] prp91 factor: 1238393042616996607329136977401179479867139865210058231383306601505891820084057543727755359 prp124 factor: 5945506987887089770667082571644962294385840458585053226859888341225130806734179704516516177329855061801793999827225610578699 [/code] |
[QUOTE=frmky;352525]C244_129_82 (finally!)
[CODE]prp93 factor: 371973750130044234197418889755790270718886656924302388949072584796597544757455801234230444367 prp152 factor: 18986863126655423891122302911893073934873395063472952429956897681870694000274915582576147172435693147822438462220279280115323862572251457040421095685961 [/CODE][/QUOTE][QUOTE=Mathew;352786]Complete C203_125_51: [CODE]prp98 factor: 19061175536434650225653270349663134322666526853870058594216673150239959232256555531972787878651417 prp106 factor: 3912807565287653415742638439008489969063610625902434283982267746582923104015918542265189385088717290665459[/CODE][/QUOTE]Thank you people! :) |
[QUOTE=XYYXF;347439]And two pretty GNFS targets:
C168_127_110 [code]518759670509518390499884894142825232305789370205934770356684820953606669616234831388561087386018771667622991938328056602692240129084683654895676978808741395629768407883[/code] C168_130_119 [code]619210585289939300853894524032703690620745598172616026950373110134063171003372954902148138740033081843433800699129174962738134140333754085866583619110148324516327414223[/code] Both ECMed up to 7600@43M.[/QUOTE] Was a decision announced on these? If NFS@Home is willing to tackle them, I could [url=http://www.mersenneforum.org/showthread.php?t=18368&page=19]request a pair of GPU polynomial searches[/url]. |
[QUOTE=swellman;353196]Was a decision announced on these? If NFS@Home is willing to tackle them, I could [url=http://www.mersenneforum.org/showthread.php?t=18368&page=19]request a pair of GPU polynomial searches[/url].[/QUOTE]
We'll do them. Go ahead and solicit polynomials. :smile: |
[QUOTE=frmky;353210]We'll do them. Go ahead and solicit polynomials. :smile:[/QUOTE]They are to be processed with B1 = 260M by yoyo@home:
[url]http://www.rechenkraft.net/yoyo/download/download/stats/ecm/xy/wu_status[/url] Let's wait a few days to prevent ECM misses :-) |
GC_4_363
[code] prp62 factor: 13053083707280729676903792683794740542008969806862451784192823 prp104 factor: 79914694839755829228338990591743922045295028806819048669831754035442884506110839431407281834701821825009 [/code] |
I'll take GC_9_230 and GW_9_230.
|
I'll take GW_5_313 and GW_10_219 as well.
|
W818 finally done
[code]
Sat Sep 7 14:03:56 2013 matrix is 18326528 x 18326705 (5507.1 MB) with weight 1617106910 (88.24/col) Tue Sep 17 09:36:26 2013 prp65 factor: 50528512367415089439910312729751888424259069971569139129253463497 Tue Sep 17 09:36:26 2013 prp108 factor: 133263680288804765589060222823771908727323225238050255063563276103874710950824782465754751076312235750155997 [/code] That was quite a large matrix and took longer than I was expecting. |
[QUOTE=fivemack;353321][code]
Sat Sep 7 14:03:56 2013 matrix is 18326528 x 18326705 (5507.1 MB) with weight 1617106910 (88.24/col) Tue Sep 17 09:36:26 2013 prp65 factor: 50528512367415089439910312729751888424259069971569139129253463497 Tue Sep 17 09:36:26 2013 prp108 factor: 133263680288804765589060222823771908727323225238050255063563276103874710950824782465754751076312235750155997 [/code] That was quite a large matrix and took longer than I was expecting.[/QUOTE] Has 2,947+ been started? |
GC_4_366
[code] prp56 factor: 15621852778733658940655600090319895772399185993908016799 prp130 factor: 3636282061985348901600965088403449886145330881960512594640056858070492822078471751795866819557300424196256576808172077226766253921 [/code] I'll take GW_12_203 |
GW_5_313
[code] prp89 factor: 44077617557442231636140062329866808251406562158052261413542525808453012191827303942342913 prp124 factor: 2832461635451521814320517906929219425148430423085907313710642652806344391546835817157772129815928256241548676147266173099771 [/code] |
L1839 done
RDS: 2^947+1: no
[code] Tue Sep 17 09:10:30 2013 matrix is 14960303 x 14960480 (4500.5 MB) with weight 1314966044 (87.90/col) Sat Sep 21 14:39:12 2013 BLanczosTime: 368940 Sat Sep 21 16:45:47 2013 prp71 factor: 41058725575548550176141642752800144216447592731576808196032677620997321 Sat Sep 21 16:45:47 2013 prp128 factor: 34693801525699891706672798257569328005627970762999529925204990192440339484605896427794692060099952307744111379753536293279318411 [/code] |
14e Lattice Sieve out of work
Powers that be may already be aware, but if not, a heads-up that we are out of 14e tasks.
|
It takes me a day or two to try to download the .dat file but usually someone beats me to it. This is a fast moving queue. :smile:
|
maybe jumping the gnu
In the hope of getting factors before I flee the country for twelve days starting on Thursday, I'm collecting the current state of C178_8352_1755 and starting linear algebra ASAP.
|
I'll take GC_11_211 and GW_3_464.
Side note: I'm struggling to get msieve to start filtering GW_12_203. Keeps kicking it out with "NFS input not found in factor base file" error. I've since read the fix for this problem but have yet to implement it. Will try it tonight. If unsuccessful, I may have to throw this one back. |
[QUOTE=fivemack;353782]In the hope of getting factors before I flee the country for twelve days starting on Thursday, I'm collecting the current state of C178_8352_1755 and starting linear algebra ASAP.[/QUOTE]
ETA 103 hours, so Friday lunchtime; I think I've set things up so I can log in from Canada, but if not then the factors will be delayed for about two weeks. |
GW_10_219
[code] prp59 factor: 71659508739728394801693603006389844041361016271037918032999 prp132 factor: 369855746369382351525594959060361131188871575417399536608971075781306997008601940722277427828684881122559681894270534285680875739613 [/code] |
GC_9_230
[code] prp72 factor: 323490956959296645274413380908054116491734924997158720621592394771363707 prp95 factor: 23981649017639344016322477289182205008446584443035316622463556956305374261615759812503753471073 [/code] |
GW_12_203
[code] prp157 = 1903386225144839539649300158222751260235875608040256168606438116888358190109659157942162612449189082120374931041394244235715558329456383375373854304998258513 prp6 = 307423 prp60 = 411174059077483514491798810862462861462950427159981255811217 [/code] The prp6 is a bit disappointing. I didn't bother running this composite through TF and other pre-NFS procedures, as I assumed it had been done years ago. Mea culpa. eta: the prp6 explains the persistent crash in msieve I experienced when trying to postprocess this number. Another bad assumption on my part... |
GW_9_230
[code] prp52 factor: 2337993185318477535090820126928380126860188973503009 prp113 factor: 53072477980286219195436936632617628035531699665328652296289886337411391604233873371897099375997416186214074250333 [/code] |
[QUOTE=swellman;354074]GW_12_203
[code] prp157 = 1903386225144839539649300158222751260235875608040256168606438116888358190109659157942162612449189082120374931041394244235715558329456383375373854304998258513 prp6 = 307423 prp60 = 411174059077483514491798810862462861462950427159981255811217 [/code] The prp6 is a bit disappointing. I didn't bother running this composite through TF and other pre-NFS procedures, as I assumed it had been done years ago. Mea culpa. eta: the prp6 explains the persistent crash in msieve I experienced when trying to postprocess this number. Another bad assumption on my part...[/QUOTE]I suspect that someone failed to remove the known factors or, equivalently, didn't use my comps.gz file to create the NFS input data. In that file the number is given as C216, which is the product of P60 and P157. BTW, could you mail me at [email]paul@leyland.vispa.com[/email] when you have the results please? Otherwise you run the risk of my failing to credit you for your work. Paul |
[QUOTE=swellman;354074]GW_12_203
[code] prp157 = 1903386225144839539649300158222751260235875608040256168606438116888358190109659157942162612449189082120374931041394244235715558329456383375373854304998258513 prp6 = 307423 prp60 = 411174059077483514491798810862462861462950427159981255811217 [/code] The prp6 is a bit disappointing. I didn't bother running this composite through TF and other pre-NFS procedures, as I assumed it had been done years ago. Mea culpa. eta: the prp6 explains the persistent crash in msieve I experienced when trying to postprocess this number. Another bad assumption on my part...[/QUOTE] If you discover that has happened in the future all that is necessary to fix it is to manually alter the .fb file to have the smaller n in it. |
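The tip above can be sketched in a few lines. This is a hypothetical helper, not part of msieve: it assumes the .fb file stores the composite on a line of the form "N &lt;value&gt;" (the usual msieve factor-base layout) and simply divides out the factor that was accidentally left in.

```python
# Hedged sketch: shrink the "N" line of an msieve .fb file after a known
# factor was accidentally left in the composite. The "N <value>" line
# format is an assumption based on typical msieve .fb files.

def patch_fb(fb_text, known_factor):
    """Return .fb contents with N replaced by N // known_factor."""
    out = []
    for line in fb_text.splitlines():
        if line.startswith("N "):
            n = int(line.split()[1])
            if n % known_factor != 0:
                raise ValueError("factor does not divide N")
            line = "N %d" % (n // known_factor)
        out.append(line)
    return "\n".join(out) + "\n"
```

Apply it to the .fb contents before restarting msieve with -nc; the relations file itself doesn't change, only the target composite.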
[QUOTE=xilman;354093]I suspect that someone failed to remove the known factors or, equivalently, didn't use my comps.gz file to create the NFS input data. In that file the number is given as C216, which is the product of P60 and P157.
BTW, could you mail me at [email]paul@leyland.vispa.com[/email] when you have the results please? Otherwise you run the risk of my failing to credit you for your work. Paul[/QUOTE] No worries. Stuff happens. And I learned a bit more about msieve. I was emailing them but got out of the habit once the jobs started coming fast and furious. Will be sure to email results to you in the future. Henryzz - thanks for the tip! |
GC_11_211
[code] prp78:123997853745103810898592127823340672788750125453660245917638293925530720589367 prp90:374974154615165034582027087727977336260707838747752784098938213051367696242665157137768459 [/code] |
I'll take GW_5_317.
|
The machine running 8352.1755 is not responding to pings. It is 5000km away. Results 10 October or so.
|
C162_3408_1361 is complete and in the database.
|
GW_3_464
[code] prp75 factor:531079406912956027520837619853024845726589135927192979046654115869936445043 prp150 factor:211651187210752021759406115241594012203207924613189134068818820882300983789980113168094879772380871865355443438574310464715328006646061158169532899381 [/code] |
I would like to reserve GC_8_245.
|
GC_8_245 splits as:
[CODE]prp76 factor: 4266237991646787625333814078482225418560811686106956188535094517017767659851 prp121 factor: 4579094096019031324858746845381876173235894730111291590822157806277746941916937995013205409964978360793687657293386342717[/CODE] |
GW_5_317
[code] prp66 factor: 301559868848687539421554732959330647858161767980957966619588583499 prp154 factor: 2534704528013142527989820268811312110873186732210085716607339413146198867890685015301099392440243198457064410780180074553435626775749206823252011899062867 [/code] |
[QUOTE=debrouxl;351602]t55 for a SNFS difficulty 239 is a bit above the 2/9 rule of thumb. You could do a bit less than that, say 14K curves at B1=11e7 :smile:
[/QUOTE] I've ECM'd C229_125_81 for 14000+ curves @B1=110M with no factors found. If your offer still stands, it would seem ready for SNFS. Suggested poly is [url=http://www.mersenneforum.org/showpost.php?p=348864&postcount=787]here[/url]. |
GNFS target C168_127_110 passed 20k curves at B1 = 260M:
[code]518759670509518390499884894142825232305789370205934770356684820953606669616234831388561087386018771667622991938328056602692240129084683654895676978808741395629768407883[/code] |
Reserving both C229_125_81 (for which I've already pre-processed the polynomial) and C168_127_110 (for which Sean created a post in the polynomial request thread).
|
I would like to reserve GW_11_213 which I will download tomorrow.
I should start -nc in about 24+ hours if all goes well. |
I'll take GC_10_246.
|
Finally done with 8352.1755
[code]
prp71 factor: 66793795168210847527387030310220396586387046273923740258376935235867777 prp107 factor: 67389516722589028574167872360840601579872985743357318676103384031021224355611971618397282275442983615255517 [/code] |
GW_11_213
... splits as:
[CODE]prp88 factor: 6503296036223630648761997557939583965790156813803739558991733383152257050963107239101451 prp126 factor: 802258661143150048191962866982741615259300368910308808197988385724807594199397593673480509620790813481936943813270598810731339[/CODE] |
[QUOTE=swellman;355626]I'll take GC_10_246.[/QUOTE]
The data file for this composite seems to be corrupt. I've downloaded it twice in two different environments with identical results: most of the relations starting at relation number ~217M are bad. Constant error messages until msieve eventually seg faults. Can this be repaired on the NFS@Home side? Or is there something I should try on my end? |
I'll look into it...
|
[QUOTE=XYYXF;347439]And two pretty GNFS targets:
C168_127_110 [code]518759670509518390499884894142825232305789370205934770356684820953606669616234831388561087386018771667622991938328056602692240129084683654895676978808741395629768407883[/code] C168_130_119 [code]619210585289939300853894524032703690620745598172616026950373110134063171003372954902148138740033081843433800699129174962738134140333754085866583619110148324516327414223[/code][/QUOTE]21k and 19k curves, respectively, at B1 = 260M. |
That's more than enough :smile:
Polynomial selection is underway for C168_127_110. |
[QUOTE=debrouxl;356377]
Polynomial selection is underway for C168_127_110.[/QUOTE] [url=http://www.mersenneforum.org/showpost.php?p=356360&postcount=296]Poly may have been found.[/url] |
I'll take GC_6_285 and C229_125_81.
|
If it's alright, I'd like to try W_2_736. I did the post-processing on an SNFS 213 in ~13 hours, so I think I can handle a bit more over the weekend. If this is alright, please let me know where to download the relations file and all that good stuff.
Edit: I'd be running it on a hyperthreaded quad-core with 8GB RAM. If that won't be sufficient, please feel free to say so! |
GC_6_285
[code]
prp73:2830496919095189172910245407971922119619136749819850199184552911838434631 prp120:233783185469678342843080189310686233451945713636513875276326824830654107938419139227271454300475108160542928795287319583 [/code] |
There are 5 numbers ready for postprocessing. Any takers?
|
I hate to do it, but I need to back out of the post-processing of W_2_736. I'm running into errors that I can't find a good answer for in spite of multiple attempts to fix them.
|
wombatman - are you using Linux or Windows? I was running into problems as well, found Windows to be far more stable.
frmky - C229_125_81 is currently in LA and should finish in a week. I won't have any resources free until then. But I will be able to tackle C168_127_110 at that time, if no one else wants it before then. |
I wish I could edit the previous post, but please disregard it. I was able to get a 64-bit MSieve compiled that has made it to the linear algebra step, so I should (hopefully) be able to complete the post-processing of W_2_736.
Edit: swellman, I'm running on Windows 7 64-bit. The issue appears to be related to gcc's need for 32-bit dlls. So even when I compile a 64-bit program (which will identify as 64-bit with "file") in MinGW-64, it tries to get a 32-bit dll and errors out if too much RAM is used. By contrast, this 64-bit Visual Studio compiled MSieve (SVN 946) happily used ~2.7GB of RAM with no issue. Edit: [CODE]linear algebra at 0.0%, ETA 17h14m 228679 dimensions (0.0%, ETA 17h14m)[/CODE] |
Great news! Glad you got msieve running and successfully digesting the relations file.
I had an issue in Linux with the data file for C229_125_81. Twice I got it downloaded without error but then it refused to extract. Rebooting into Windows and repeating resulted in success. Can't explain it. |
That is strange indeed. Maybe one of those issues where the escape character matters? Who knows!
|
Frmky -- I'd be happy to take another number in a day or two when I finish W_2_736. I looked on the NFS@Home site to see which were queued for postprocessing and didn't see any. What are the choices?
|
I've updated it.
|
I'd like to take GC_3_465 and W_2_737, please!
|
I would like to give post-processing a go, if pre-compiled binaries for W7 x64 and some basic instructions are available. What are the requirements? I've got a 3770k with 8 GB. |
|
Victor,
I have a binary linked here: [url]http://www.mersenneforum.org/showpost.php?p=357227&postcount=10[/url] You will also need to download pthreadVC2.dll (if you can't find it online, let me know and I'll send it to you). As for instructions, this is a good place to start: [url]http://gilchrist.ca/jeff/factoring/nfs_beginners_guide.html[/url] |
Thanks for the comprehensive guide!
I've installed the software and binaries:
- Visual C++ 2013/2012
- Python and Notepad++ (for the factmsieve.py script)
- GGNFS SVN413 and MSIEVE 1.52 SVN 939 (your msieve gives an error at the polysearch?)
I'm now trying the example from the guide (the 100-digit composite). Poly selection is done; it's now running the siever on the 4 cores (8 threads). So far it is at 58% (2.3M relations out of an estimated minimum of 4.1M relations). I expect it to start linear algebra in an hour or so, so let's wait and see how things go. |
If the factmsieve script has CUDA=TRUE in it, my msieve will definitely give an error since it was compiled without CUDA added. That would be one thing to check.
|
W_2_736 factors as:
[CODE]prp51 factor: 557407254155247166712738656005711709410906698630783 prp52 factor: 5560881101474937445745768799788053987340369105823599 prp58 factor: 1280564727679448478678574877582570543716410369024706830467[/CODE] |
VictordeHolland - You should also check out Yafu. It has its own subforum. It ties together msieve, ggnfs, and gmp-ecm, as well as many innate functions.
For post-processing the NFS@Home jobs, you only need to use msieve. PM frmky for the URL and credentials if you don't have them already. Once you reserve and download/extract/rename the appropriate files to the msieve directory, just run a single command (-nc) in msieve to run the filter, LA and sqrt routines. The msieve readme has details. One hard lesson - LA efficiency drops off a cliff with hyperthreading. In your case use 3-4 threads, e.g. [code] msieve -v -nc -t 3 [/code] |
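To make the thread-count advice concrete, here is a tiny hypothetical helper (not part of msieve) that builds the command line above, using one thread per physical core since LA efficiency drops sharply under hyperthreading:

```python
# Hypothetical helper illustrating the advice above: run msieve's
# combined filtering/LA/sqrt step (-nc) with one thread per PHYSICAL
# core. On a hyperthreaded CPU, half the logical CPUs are physical.

def msieve_nc_command(logical_cpus, hyperthreaded=True):
    physical = logical_cpus // 2 if hyperthreaded else logical_cpus
    threads = max(1, physical)
    return ["msieve", "-v", "-nc", "-t", str(threads)]

# A quad-core i7 with HT reports 8 logical CPUs -> use 4 threads:
# msieve_nc_command(8)  ->  ["msieve", "-v", "-nc", "-t", "4"]
```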
Man, I just learned something myself. I've been using -t 8 for a quadcore with hyperthreading. I'll have to try the -t 4 and see how it looks.
|
I've contacted frmky and he provided me with a login for the data files.
So I'll take GC_8_246. |
[QUOTE=wombatman;357321]Man, I just learned something myself. I've been using -t 8 for a quadcore with hyperthreading. I'll have to try the -t 4 and see how it looks.[/QUOTE]
You will see a major improvement. |
I don't know about major, but the expected time went from ~20.5 hours to ~18 hours at the start. Once it finishes, I'll try 3 threads and see how that looks.
|
[QUOTE=wombatman;357406]I don't know about major, but the expected time went from ~20.5 hours to ~18 hours at the start. Once it finishes, I'll try 3 threads and see how that looks.[/QUOTE]
Turn off HT. |
Ah, ok. I'll try that out tonight. Why does turning off hyperthreading give such an assist?
|
msieve has to do a lot of work to communicate between threads during linear algebra. With hyperthreading the overhead from this exceeds the extra CPU power.
Sieving, on the other hand, runs independently. So in my experience running 8 sievers in parallel on a system with 4 cores + hyperthreading gets about 25% more relations per minute than running 4 sievers. Chris |
Good to know. Unfortunately, I don't see an option in my BIOS to turn off hyperthreading. If I use 4 threads instead of 8, does hyperthreading play a significant role there?
Also, for what it's worth, I tried -t 4 and -t 8 just now to see what difference there might be in the time. With -t 4, the expected time was ~9h 15 min (and slightly climbing). With -t 8, it's ~8h and dropping. So maybe some of the issues with hyperthreading have been improved upon? |
HT is automatically engaged - not sure there is a way to turn it off other than don't use more than 4 threads.
You're working with 30 bit lpb there. Try it with a 31 bit. LA can take a week on factorizations of that size. I'd be curious to see how much the ETA changes. I also note you are running the latest msieve, which includes a huge improvement in LA performance. Perhaps this mitigates much of the HT effect. But I'm just guessing. Tried to run your compiled binary for msieve but had problems. Will direct my questions to your other thread, if you are willing. |
Absolutely. You can PM as well if you'd like.
|
Many boards (I think most) have the option to turn off HT in the BIOS.
|
Most of them do. This is a laptop, though, so it may be a bit more locked down. I went through all the BIOS options, and I couldn't even find things like the option to change RAM timings.
|
W_2_737 factors as:
[CODE]prp55 factor: 3639948209785681640330982423344863288901552433435622717 prp121 factor: 9667131945953633669529391194502837192293510216854828896126077765685195914196332441160274502127300016683619846239468417937[/CODE] Moving on to GC_3_465! |
GC_8_246 is ready:
[code] prp64 factor: 7450391778732693727105503441634852941715710901826974899928715291 prp132 factor: 778676426970473145106077221395200338755770932585955845450149945184597427041012733530986228927830026349766204465941888300244625682623 [/code] For the statistic junkies (including myself) some highlights of the log: [code] Sat Oct 26 00:55:04 2013 Msieve v. 1.51 (SVN Official Release) Sat Oct 26 00:55:04 2013 factoring 5801444449793761902554310998722384964736331243283895289517387025395045627296796552343499559511025101275657028584214475441669022037825124493702315062835511260606214091477648754354144392100293088293 (196 digits) Sat Oct 26 00:55:05 2013 commencing relation filtering Sat Oct 26 00:55:05 2013 estimated available RAM is 8145.3 MB Sat Oct 26 00:55:05 2013 commencing duplicate removal, pass 1 Sat Oct 26 01:40:29 2013 found 19165304 hash collisions in 128889877 relations Sat Oct 26 01:41:02 2013 added 1217912 free relations Sat Oct 26 01:41:02 2013 commencing duplicate removal, pass 2 Sat Oct 26 01:46:31 2013 found 15702994 duplicates and 114404795 unique relations Sat Oct 26 02:31:35 2013 memory use: 3083.1 MB Sat Oct 26 02:45:40 2013 RelProcTime: 6635 Sat Oct 26 02:45:40 2013 commencing linear algebra Sat Oct 26 02:55:57 2013 matrix is 4651773 x 4651951 (1385.7 MB) with weight 429481219 (92.32/col) Sat Oct 26 02:55:57 2013 sparse part has weight 312094378 (67.09/col) Sat Oct 26 02:55:57 2013 saving the first 48 matrix rows for later Sat Oct 26 02:55:58 2013 matrix includes 64 packed rows Sat Oct 26 02:55:59 2013 matrix is 4651725 x 4651951 (1343.9 MB) with weight 340905364 (73.28/col) Sat Oct 26 02:55:59 2013 sparse part has weight 305764606 (65.73/col) Sat Oct 26 02:56:16 2013 commencing Lanczos iteration (3 threads) Sat Oct 26 20:20:26 2013 BLanczosTime: 63286 Sat Oct 26 20:20:26 2013 commencing square root phase Sat Oct 26 21:03:48 2013 reading relations for dependency 3 Sat Oct 26 21:03:49 2013 read 2327042 cycles Sat Oct 26 21:03:52 2013 cycles contain 
6230122 unique relations Sat Oct 26 21:09:35 2013 read 6230122 relations Sat Oct 26 21:10:09 2013 multiplying 6230122 relations Sat Oct 26 21:17:03 2013 multiply complete, coefficients have about 196.70 million bits Sat Oct 26 21:17:05 2013 initial square root is modulo 11453653 Sat Oct 26 21:25:48 2013 sqrtTime: 3922 Sat Oct 26 21:25:48 2013 prp64 factor: 7450391778732693727105503441634852941715710901826974899928715291 Sat Oct 26 21:25:48 2013 prp132 factor: 778676426970473145106077221395200338755770932585955845450149945184597427041012733530986228927830026349766204465941888300244625682623 Sat Oct 26 21:25:48 2013 elapsed time 20:30:44 [/code]Taking [B]GC_12_206[/B] next. |
GC_3_465 factors as:
[CODE]prp84 factor: 997449132784140554089559851778056222463774021735548392543653838242777809872509410633 prp108 factor: 105147348859843817402740035407465879164708293212673130156267228488506925302251314718802799385933747532241997 elapsed time 27:19:55[/CODE] I'll finish off GC_7_263. |
I'm having issues post-processing GC_12_206. First of all, I can't download GC_12_206.poly from NFS@home. I tried running it without the poly and this happened:
[CODE] <1000+ reading relation errors> error -11 reading relation 75446975 error -5 reading relation 75446976 error -1 reading relation 75446977 error -9 reading relation 75446978 error -1 reading relation 75446979 error -5 reading relation 75446980 error -1 reading relation 75446981 error -5 reading relation 75446982 error -5 reading relation 75446983 modsqrt_1 failed[/CODE]After that MSIEVE either shut down by itself, or Windows reported that the program had stopped responding. This happens on both the official 1.51 version and MSIEVE 1.52 SVN939 (win64_i7). How should I proceed? |
If you're using the factmsieve.py script, it will make the .poly file for you. I'm also getting the modsqrt_1 failure for GC_7_263. I'm going to try redownloading the relations file tomorrow and see if I get the same error.
|
You can first use remdups on the file. If you are using linux, a binary is in the download directory.
zcat rels.dat.gz | remdups4 800 -v > msieve.dat I don't have a Windows binary unfortunately... |
As it turns out, if you have MinGW (or MinGW-64), you don't need a binary! I copied the source from here: [url]http://dubslow.tk/random/remdups4.c.txt[/url]
Compiled with [CODE]gcc -O3 -o remdups remdups.c[/CODE] and it worked without issue. Currently starting on GC_7_263! Edit: Seems I spoke too soon. Now I'm getting the "wants 1000000 more relations" message. I guess it's suspicious that the 12+GB uncompressed relations file went to ~3GB after processing through remdups. |
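As a toy illustration of what remdups4 is doing (this is not a substitute for it: remdups4 bounds its memory with a hash of limited depth, while this naive version keeps every key in RAM): each siever relation line begins with its a,b pair before the first colon, and any line repeating an already-seen pair is a duplicate that must go before msieve filtering.

```python
# Naive relation de-duplicator: keep the first occurrence of each (a,b)
# pair. Real remdups4 uses a bounded-depth hash so it scales to huge
# files; this toy version stores every key and suits small inputs only.

def dedup_relations(lines):
    seen = set()
    unique = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue                      # skip blanks and comment lines
        key = line.split(":", 1)[0]       # the "a,b" prefix
        if key not in seen:
            seen.add(key)
            unique.append(line)
    return unique
```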
Here's what I get running your command (limited to the end part, of course), Frmky:
[CODE]Mon Oct 28 12:20:16 2013 31.5M unique relns 1.59M duplicate relns (+0.05M, avg D/U ratio in block was 9.4%) Found 31809992 unique, 1617420 duplicate (4.8% of total), and 181942 bad relations. Largest dimension used: 147 of 800 Average dimension used: 97.1 of 800 Terminating program at Mon Oct 28 12:20:18 2013[/CODE] Does that seem right to you? |
No. It terminated before the end of the file. I'll look into it.
|
[QUOTE=wombatman;357744]Here's what I get running your command (limited to the end part, of course), Frmky:
[CODE]Mon Oct 28 12:20:16 2013 31.5M unique relns 1.59M duplicate relns (+0.05M, avg D/U ratio in block was 9.4%) Found 31809992 unique, 1617420 duplicate (4.8% of total), and 181942 bad relations. Largest dimension used: 147 of 800 Average dimension used: 97.1 of 800 Terminating program at Mon Oct 28 12:20:18 2013[/CODE] Does that seem right to you?[/QUOTE] Based on this report, you don't need [I]dim[/I] value (hash depth) to be above 150. (you may have run out of memory with dim=800.) Could you retry with [FONT="Courier New"]zcat rels.dat.gz | remdups4 [B][COLOR="DarkRed"]150[/COLOR][/B] -v > msieve.dat[/FONT] |