Carlos, make sure to use the latest SVN before attempting the matrix; the cache detection code was just fixed to work better on modern processors, and you get a huge slowdown in the LA if the cache size is misreported.
|
[QUOTE=jasonp;234340]Carlos, make sure to use the latest SVN before attempting the matrix; the cache detection code was just fixed to work better on modern processors, and you get a huge slowdown in the LA if the cache size is misreported.[/QUOTE]
Thank you. I'll have to contact Jeff to build the new binaries. |
Done with [URL="http://www.sendspace.com/file/76wc8l"]62M-64M[/URL].
|
[QUOTE=bchaffin;234354]Done with [URL="http://www.sendspace.com/file/76wc8l"]62M-64M[/URL].[/QUOTE]
Got it! |
64M-65M complete.
|
[url=http://www.sendspace.com/file/h5031i]58.5-60M[/url]: 2382326 relations.
I'll take 68-70. |
[QUOTE=bsquared;234383][URL="http://www.sendspace.com/file/h5031i"]58.5-60M[/URL]: 2382326 relations.
I'll take 68-70.[/QUOTE] Downloaded. |
As soon as I finish my current work and get Andi47's and Schickel's relations, I will run a relations count, OK? Last time I ran it I got 60M unique relations.
|
[url=http://www.sendspace.com/file/ajg58k]68-69M[/url]: 1508282 relations.
|
15.0-15.5M is done, 1101038 relations: [url]http://www.sendspace.com/file/mo1swv[/url]
|
Done with [URL="http://www.sendspace.com/file/vrm9ih"]65M-66M[/URL].
|
[QUOTE=Andi47;234464]15.0-15.5M is done, 1101038 relations: [URL]http://www.sendspace.com/file/mo1swv[/URL][/QUOTE]
You say 15-15.5M but the file is named 15.16. So which is it? Another question: you have 4 files; did you divide them by 0.250M or by 0.125M? EDIT: By the size of the files I suppose you finished 15-15.5M. [QUOTE=bsquared;234450][URL="http://www.sendspace.com/file/ajg58k"]68-69M[/URL]: 1508282 relations.[/QUOTE] [QUOTE=bchaffin;234467]Done with [URL="http://www.sendspace.com/file/vrm9ih"]65M-66M[/URL].[/QUOTE] All downloaded. |
[QUOTE=em99010pepe;234468]You say 15-15.5M but the file is named 15.16. So which is it? Another question: you have 4 files; did you divide them by 0.250M or by 0.125M?
EDIT: By the size of the files I suppose you finished 15-15.5M. [/QUOTE] The file is [I]accidentally[/I] named 15.16, sorry. But it is just 15-15.5M. 15.5-16M should be complete by tonight. |
Done with [URL="http://www.sendspace.com/file/94tw5j"]66M-68M[/URL].
I'll take 70M-72M next. |
[QUOTE=bchaffin;234544]Done with [URL="http://www.sendspace.com/file/94tw5j"]66M-68M[/URL].
I'll take 70M-72M next.[/QUOTE] Downloaded. |
I'm in the process of uploading.....very slow going: 19-20K/sec with 1.4GB to transfer. Looks like I'll be done by Friday for sure.
|
[url=http://www.sendspace.com/file/upee1d]69-70M[/url]:1496065 relations.
I'll take 72-75M. |
[QUOTE=bsquared;234628][URL="http://www.sendspace.com/file/upee1d"]69-70M[/URL]:1496065 relations.
I'll take 72-75M.[/QUOTE] Downloaded. [QUOTE=schickel;234620]I'm in the process of uploading.....very slow going: 19-20K/sec with 1.4GB to transfer. Looks like I'll be done by Friday for sure.[/QUOTE] My range will finish within a few hours. I will run -nc to check for total unique relations after I get your files, then we can decide if post-processing can be started. |
My file of 15.5M-16M is currently uploading; 1091882 relations. (Should be done in ~half an hour.)
|
[QUOTE=Andi47;234664]My file of 15.5M-16M is currently uploading; 1091882 relations. (Should be done in ~half an hour.)[/QUOTE]
upload complete (faster than I thought it would be): [url]http://www.sendspace.com/file/0y29mi[/url] |
80M-88M done.
[QUOTE=Andi47;234665]upload complete (faster than I thought it would be): [URL]http://www.sendspace.com/file/0y29mi[/URL][/QUOTE] Downloaded. |
[QUOTE=schickel;234620]I'm in the process of uploading.....very slow going: 19-20K/sec with 1.4GB to transfer. Looks like I'll be done by Friday for sure.[/QUOTE]
You can start posting links for the parts already uploaded. |
Boy, when they say ADSL, they mean [B]A[/B]DSL. Could only manage <20K upstream and had to reboot once to finish the upload.
Just a hair over 24M relations. (Almost 1.5GB; added bonus, 58,000 -11 errors....)
[url]http://www.sendspace.com/file/2rkksh[/url]
[url]http://www.sendspace.com/file/0aa1yl[/url]
[url]http://www.sendspace.com/file/t92g8d[/url]
[url]http://www.sendspace.com/file/bdidfu[/url]
[url]http://www.sendspace.com/file/igclcp[/url]
[url]http://www.sendspace.com/file/estuhv[/url]
There's one standalone file and one RAR file split with [URL="http://www.freebyte.com/hjsplit/"]HJSplit[/URL].... Frank |
[QUOTE=schickel;234737]Boy, when they say ADSL, they mean [B]A[/B]DSL. Could only manage <20K upstream and had to reboot once to finish the upload.
Just a hair over 24M relations. (Almost 1.5GB; added bonus, 58,000 -11 errors....) [URL]http://www.sendspace.com/file/2rkksh[/URL] [URL]http://www.sendspace.com/file/0aa1yl[/URL] [URL]http://www.sendspace.com/file/t92g8d[/URL] [URL]http://www.sendspace.com/file/bdidfu[/URL] [URL]http://www.sendspace.com/file/igclcp[/URL] [URL]http://www.sendspace.com/file/estuhv[/URL] There's one stand alone file and one RAR file split with [URL="http://www.freebyte.com/hjsplit/"]HJSplit[/URL].... Frank[/QUOTE] Got them all. |
Frank,
I got corrupted files. I joined them with HJSplit, but when unpacking I get CRC errors. Could you put the dat file on your FTP server? Or if someone could download the files and check whether the problem is on my end, I'd appreciate it. Carlos |
[QUOTE=em99010pepe;234746]Frank,
I get files corrupted. I joined the files with HJSplit but when unpacking I get CRC errors. Put the dat file in your FTP server. Or if someone could download the files and check them to see if the problem is not mine I appreciate. Carlos[/QUOTE]Bah! I guess the SendSpace Wizard isn't quite as good as it needs to be.... Let me PM you with the CRCs from HJSplit and maybe it's only one file. Sorry....Frank |
[QUOTE=schickel;234747]Bah! I guess the SendSpace Wizard isn't quite as good as it needs to be.... Let me PM you with the CRCs from HJSplit and maybe it's only one file.
Sorry....Frank[/QUOTE] Yep, it's the first one, a4788-16m-30m.rar.001. |
I got a lot of errors reading Frank's relations, but in the end the count showed 85.4M unique relations found so far. I don't know whether those bad relations are also included in that count.
Relations needed: [B]~97M unique[/B]
Relations received: [B]85.4M unique (~88.0%)[/B] |
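The percentage in the tally above is just relations received divided by relations needed. A minimal sketch (the helper name is mine; only the two figures come from the post):

```python
# Sketch: progress toward the unique-relation target quoted above.
# 97e6 (target) and 85.4e6 (collected) are the post's numbers; the helper is illustrative.
def coverage_pct(unique_found: float, unique_needed: float) -> float:
    """Percent of the unique-relation target collected so far."""
    return 100.0 * unique_found / unique_needed

print(f"{coverage_pct(85.4e6, 97e6):.1f}%")  # ~88.0%, matching the post
```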
Done with [URL="http://www.sendspace.com/file/pevbo4"]70M-72M[/URL].
I'll take 75M-77M next. |
[QUOTE=bchaffin;234910]Done with [URL="http://www.sendspace.com/file/pevbo4"]70M-72M[/URL].
I'll take 75M-77M next.[/QUOTE] Downloaded. Do we need to go beyond 92M? |
[url=http://www.sendspace.com/file/w8qso3]72-75M[/url]: 4409292 relations
|
[QUOTE=bsquared;235073][URL="http://www.sendspace.com/file/w8qso3"]72-75M[/URL]: 4409292 relations[/QUOTE]
Got it. I think I will start the post-processing without the 77M-80M range, bsquared, your thoughts? |
Done with [URL="http://www.sendspace.com/file/f0nn1d"]75M-77M[/URL].
|
[QUOTE=bchaffin;235086]Done with [URL="http://www.sendspace.com/file/f0nn1d"]75M-77M[/URL].[/QUOTE]
Downloaded. I scheduled the start of the post-processing for next Wednesday, so if you guys can manage to finish 77M-80M by then, I'd appreciate it. EDIT: One thing: Jeff still needs to release a new msieve binary based on the new SVN. I already contacted him, but I think he is very busy. |
I'll finish up 77-80. I think we should have plenty of relations for a matrix, the only question is how big it will be. If you post the log file after running the filtering step, we can comment on it then.
I can probably build a binary for you too, if necessary. What platform (windows, linux, 32, 64 bit) are you on? |
[QUOTE=bsquared;235091]I'll finish up 77-80. I think we should have plenty of relations for a matrix, the only question is how big it will be. If you post the log file after running the filtering step, we can comment on it then.
I can probably build a binary for you too, if necessary. What platform (windows, linux, 32, 64 bit) are you on?[/QUOTE] Windows 7 64 bit. Please compile it with large blocks, TD=80. |
[QUOTE=em99010pepe;235095]Windows 7 64 bit. Please compile it with large blocks, TD=80.[/QUOTE]
I've put the exe on my [URL="http://sites.google.com/site/bbuhrow/home/factorization-code-links"]webpage[/URL]. Also, here is some more data [URL="http://www.sendspace.com/file/xg2xv0"]77-79M[/URL]: 2871211 relations |
Thank you. Do you think we should increase target density to 100 for this case?
|
[QUOTE=em99010pepe;235226]Thank you. Do you think we should increase target density to 100 for this case?[/QUOTE]
I don't know... I've been out of the running-giant-postprocessing-step game for too long to know what the tradeoffs and breakpoints are. |
[QUOTE=bsquared;235230]I don't know... I've been out of the running-giant-postprocessing-step game for too long to know what the tradeoffs and breakpoints are.[/QUOTE]
Let's wait for Batalov's or Greg's reply; they are the experts. Anyway, I just need one more day or less to finish an RPS range before starting post-processing on this integer. |
[QUOTE=em99010pepe;235234]Let's wait for Batalov or Greg's reply. They are the experts. Anyway, I just need one more day or less to finish a RPS range to start post-processing this integer.[/QUOTE]
Sounds good. I can get the last 1M range done by tomorrow morning (roughly 18hrs from now). |
[QUOTE=bsquared;235237]Sounds good. I can get the last 1M range done by tomorrow morning (roughly 18hrs from now).[/QUOTE]
Great! By tomorrow at this time we should have the post-processing running. |
[URL="http://www.sendspace.com/file/hjs9fm"]79-80M[/URL]: 1418953 relations.
Let's see what kind of matrix we get. |
Started the post-processing.
[code]Tue Nov 02 14:01:43 2010 added 121729 free relations
Tue Nov 02 14:01:43 2010 commencing duplicate removal, pass 2
Tue Nov 02 14:04:55 2010 found 31969293 duplicates and 94229343 unique relations
Tue Nov 02 14:04:55 2010 memory use: 724.8 MB
Tue Nov 02 14:04:55 2010 reading ideals above 91947008
Tue Nov 02 14:04:55 2010 commencing singleton removal, initial pass
[/code] |
[code]Tue Nov 02 14:41:58 2010 commencing linear algebra
Tue Nov 02 14:41:59 2010 read 9522426 cycles
Tue Nov 02 14:42:15 2010 cycles contain 28095872 unique relations
Tue Nov 02 14:45:45 2010 read 28095872 relations
Tue Nov 02 14:46:26 2010 using 20 quadratic characters above 1073741784
Tue Nov 02 14:48:28 2010 building initial matrix
Tue Nov 02 14:54:16 2010 memory use: 3903.5 MB
Tue Nov 02 14:54:29 2010 read 9522426 cycles
Tue Nov 02 14:54:33 2010 matrix is 9522249 x 9522426 (3237.8 MB) with weight 1008294406 (105.89/col)
Tue Nov 02 14:54:33 2010 sparse part has weight 734502394 (77.13/col)
Tue Nov 02 14:56:10 2010 filtering completed in 2 passes
Tue Nov 02 14:56:14 2010 matrix is 9519947 x 9520124 (3237.6 MB) with weight 1008210664 (105.90/col)
Tue Nov 02 14:56:14 2010 sparse part has weight 734483793 (77.15/col)
Tue Nov 02 14:57:23 2010 matrix starts at (0, 0)
Tue Nov 02 14:57:26 2010 matrix is 9519947 x 9520124 (3237.6 MB) with weight 1008210664 (105.90/col)
Tue Nov 02 14:57:26 2010 sparse part has weight 734483793 (77.15/col)
Tue Nov 02 14:57:26 2010 saving the first 48 matrix rows for later
Tue Nov 02 14:57:29 2010 matrix includes 64 packed rows
Tue Nov 02 14:57:31 2010 matrix is 9519899 x 9520124 (3151.1 MB) with weight 821218840 (86.26/col)
Tue Nov 02 14:57:31 2010 sparse part has weight 730842123 (76.77/col)
Tue Nov 02 14:57:31 2010 using block size 262144 for processor cache size 8192 kB
Tue Nov 02 14:57:49 2010 commencing Lanczos iteration (4 threads)
Tue Nov 02 14:57:49 2010 memory use: 3788.7 MB
Tue Nov 02 14:59:20 2010 linear algebra at 0.0%, ETA 151h38m
Tue Nov 02 14:59:49 2010 checkpointing every 70000 dimensions
[/code] ETA will reduce as soon as I get home and overclock the CPU from 3.4 GHz (stable for LLR) to 3.7 GHz (stable for msieve). |
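The per-column weight figures in the log above ("NN.NN/col") are simply the total matrix weight divided by the number of columns. A small illustrative check, with values copied from the log (the helper itself is not part of msieve):

```python
# Reproduce the "(NN.NN/col)" figures from the msieve log: weight / columns.
def weight_per_col(total_weight: int, columns: int) -> float:
    return total_weight / columns

print(round(weight_per_col(734483793, 9520124), 2))   # sparse part: 77.15
print(round(weight_per_col(1008210664, 9520124), 2))  # full matrix: 105.9 (log shows 105.90)
```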
ETA now is 141 hours. Expect factors on 08 Nov 10.
|
Once this iteration is factored, is it a free-for-all to factor the next? Or is there some coordinated effort to perform the (hopefully) trivial factoring of the next few composites?
Would it be rude for those not post-processing to still run curves, or just time-wasting? If a factor happened to be found via the additional curves, would that be good or bad? I am not doing the above, but I'm restarting some machines after a trip and the questions came to mind. |
[QUOTE=EdH;235684]Once this iteration is factored, is it a free-for-all to factor the next? Or, is there some coordinated effort to perform the (hopefully) trivial factoring of the next few composites?[/QUOTE]
As far as I know, it's a free-for-all with the coordination being: factors are reported in [URL="http://factordb.com/sequences.php?se=1&aq=4788"]the DB[/URL] and significant work is reported in [URL="http://www.mersenneforum.org/showthread.php?t=11615"]the thread[/URL], e.g. informing everyone how much ECM has been run on a number or saying that you will be completing a number via GNFS. [QUOTE=EdH;235684]Would it be rude for those not post-processing to still run curves, or just time wasting. If a factor was happened upon, via the additional curves, would that be good or bad?[/QUOTE] Do you mean, for example, to run ECM curves on this c170 right now? I'd call it wasteful, not rude. Either your ECM work will be wasted because no factor was found (which is most likely), or the GNFS work will be wasted because you found a factor (unlikely, but a net 'good' result). They wouldn't be harmful; after all, they just might find a factor and save us the extra couple of days of post-processing. But the chances that you'd find one in this time period, after the ECM we've given it, are extremely slim, and whether you do the ECM or not, we'll have the answer very soon. If you were to find it via ECM now, then all the work done on the GNFS would have been wasted, but the end result would still be that we have the factors.
You can run some ECM curves on this c170, but you have less than 71 hours until I get the factors from the post-processing.
|
As shown by Tom's and Greg's tests, larger density filtering attempts will need
1. more relations to succeed (there's no free lunch)
2. more memory to run
3. but will be faster (in this size range)
For every particular algebra setup, there's an optimum in the sky (probably the largest+densest matrix that still fits in memory and converges in filtering). The minimum total factoring time is even trickier - but doesn't have to be perfect if all participants have something else to switch to, anyway.
I'd suggest having a few precompiled binaries at hand and trying e.g. TD 100 first; if it doesn't converge or the resulting matrix doesn't fit in memory, then fall back to 90 or 80. The good ol' 70 is also nothing to sneeze at. Sometimes it is simply the best (especially for run-of-the-mill transient aliquot gnfs projects).
So, the algorithm: compile TD 70, 80, 90, 100, and also make builds without LARGE_BLOCKS, just in case (because the stages are separate, you can combine these two dimensions). Before every compilation, insert a self-describing string in logprintf("Msieve 1.47..."), ok? We have this for example (B+D):
[FONT=Arial Narrow]Sun Oct 10 05:03:41 2010 Msieve v. 1.47 SVN379 LARGE_BLOCKS zlib density100[/FONT]
[FONT=Arial Narrow]Sun Oct 10 05:03:41 2010 random seeds: 3abfbd9f cd113da7[/FONT]
[FONT=Arial Narrow]Sun Oct 10 05:03:41 2010 factoring 2760869837544182879281843314...95882573313215568949924071 (211 digits)[/FONT]
[FONT=Arial Narrow]Sun Oct 10 05:03:42 2010 searching for 15-digit factors... etc[/FONT]
My 2 cents. |
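The fallback recipe above can be sketched as a simple loop. This is only an illustration of the strategy, not a real msieve interface; `filtering_succeeds` is a hypothetical stand-in for running the matching precompiled binary and checking whether filtering converged and the matrix fits in memory:

```python
# Sketch of the suggested fallback: try the densest target first, then back off.
# `filtering_succeeds` is a hypothetical callback, NOT part of msieve.
def pick_target_density(filtering_succeeds, densities=(100, 90, 80, 70)):
    """Return the first target density whose filtering run works, else None."""
    for td in densities:
        if filtering_succeeds(td):
            return td
    return None  # even TD 70 failed; gather more relations instead

# Toy run: pretend only TD <= 80 converges on this relation set.
print(pick_target_density(lambda td: td <= 80))  # 80
```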
Looks like the filtering needs general command line argument parsing. Command lines for filtering and LA are in my queue.
The target density defaults to 70 because it is a good compromise that keeps the memory use down while still producing a matrix that is small (enough) and sparse. My guess is that you have to be sieving with multiple machines, and generate a very large matrix, for a higher target density to reduce the LA time by more than the extra calendar-time spent sieving. |
[QUOTE=jasonp;235744]Looks like the filtering needs general command line argument parsing. Command lines for filtering and LA are in my queue.
The target density defaults to 70 because it is a good compromise that keeps the memory use down while still producing a matrix that is small (enough) and sparse. My guess is that you have to be sieving with multiple machines, and generate a very large matrix, for a higher target density to reduce the LA time by more than the extra calendar-time spent sieving.[/QUOTE] Hmmm... maybe keep a default of 70 (seems to be a good (enough) compromise, at least for small and medium size factorizations), but allow overriding this default by command line. |
[QUOTE=EdH;235684]
Would it be rude for those not post-processing to still run curves, or just time wasting. If a factor was happened upon, via the additional curves, would that be good or bad? [/QUOTE] Just a few random thoughts about ECM during GNFS: personally I am currently running sequence 10212, where I keep encountering cofactors around c140 to c145 (I know, that's quite a bit smaller than c170, but that's the difference between home computing and team computing). My approach is the following:
1.) ECM to the desired extent (e.g. for a c141 I do full B1=11e6 and maybe 500@43e6).
2.) Poly search. In parallel (on the idle threads of my i7) I do P-1 and P+1 to e.g. B1=5e9 and B2=5e15, followed by yet more ECM, until poly search has finished.
3.) Sieving. And [I]only[/I] sieving. No ECM at this time. Not a single curve.
4.) Postprocessing. At this time I might consider ECMing [I]something else[/I], but [B]not[/B] the current cofactor of the sequence: I would consider it wasteful if I found no ECM factor, and [I]very[/I] annoying to find an ECM factor just a few hours before a two-to-three-week GNFS finishes and outputs the factors. |
[QUOTE=EdH;235684]...Would it be rude for those not post-processing to still run curves, or just time wasting. If a factor was happened upon, via the additional curves, would that be good or bad?[/QUOTE]
It won't be rude to run curves, -- but it would be rude to find a factor. :missingteeth: |
LA will finish within 4 hours.
|
[code]prp55: 5020628089729196540791781957106211235328810013516737941
prp115: 6236805693823202812256436162373797339317645025210784211942726700957117792931768486741474427658616159852603436208839[/code] ecm miss? |
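As a quick sanity check on the reported split (digit counts only; this does not re-verify primality), a p55 times a p115 should give a 170-digit product, matching the c170 that was factored:

```python
# Sanity-check the reported factors from the post: digit counts should sum
# to the 170 digits of the composite. Primality is not re-checked here.
p55 = 5020628089729196540791781957106211235328810013516737941
p115 = 6236805693823202812256436162373797339317645025210784211942726700957117792931768486741474427658616159852603436208839

assert len(str(p55)) == 55 and len(str(p115)) == 115
print(len(str(p55 * p115)))  # 170 digits, consistent with the c170
```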
[QUOTE=em99010pepe;236111][code]prp55: 5020628089729196540791781957106211235328810013516737941
prp115: 6236805693823202812256436162373797339317645025210784211942726700957117792931768486741474427658616159852603436208839[/code][/QUOTE]Awesome job. Thanks for the assist with the post-processing.[quote=em99010pepe]ecm miss?[/quote]I would say no. We had ~85% of t55 and it was getting hard to get any further. I [I]would[/I] have been bummed if it had turned out like my c146: three weeks to find out it was p43*p103. Now that I've got it figured out, I can put an ECM server up on the next large composite. That'll make the job easier to track.... |