I realise I was a bit callow about attribution for 6+383, and should be more careful here.
Would something like "mersenneforum, Lehigh University + Womack" be acceptable? I could go the whole hog and list all the contributors, though that starts getting to particle-physics levels of silliness for numbers not much bigger than this.
[QUOTE=fivemack;129962]
Would something like "mersenneforum, Lehigh University + Womack" be acceptable? ...[/QUOTE] Maybe Richard would consider extending the franchise, and let us use Womack+mersenneforum/NFSNET (?).

For comparison, perhaps I could share Tom's PM on my first range, for which there appear to be some 20 subranges in which the relations didn't get recorded --- I hit a filespace quota overnight, and cleared a stored NFSNET factorbase in the morning, after I found out. So those 20 ranges had a reasonable first half (or so), missed most of the second half, then did a last 30 min.-1 hr. If I understood the PM correctly, I may be able to see the specific 20 subranges that need re-running. Way different from the tight control in the NFSNET server/client, but it seems workable, with some extra effort on Tom's part.

I should do better this time; I'm running in /tmp. -Bruce (credit's not yet an issue, until I see how things look.)
[QUOTE=bdodson;129986]I should do better this time; I'm running in /tmp.[/QUOTE]On a number of systems /tmp is cleared on a reboot; on others /tmp is a ramdisk.
Your system may preserve /tmp over a reboot, but I'd check to make sure. Paul
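For anyone unsure which case applies to their machine, one quick check (this assumes a Linux box; the exact `df` output format varies between systems) is to look at what filesystem backs /tmp:

```shell
# Print the filesystem type backing /tmp. "tmpfs" (or "ramfs") means a
# RAM-backed filesystem whose contents vanish on reboot.
df -T /tmp | tail -n 1

# With util-linux findmnt installed, this prints just the type;
# it exits nonzero when /tmp is not a separate mount point.
findmnt -no FSTYPE /tmp 2>/dev/null || true
```

Even if /tmp sits on a disk-backed filesystem, many distributions clear it on boot anyway, so a non-tmpfs result is not a guarantee that files survive.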
[QUOTE=fivemack;129962]I realise I was a bit callow about attribution for 6+383, and should be more careful here.[/QUOTE]Better to be callow than callous.
I'll take 93-100
[QUOTE=fivemack;129612]
bdodson 90-91 (with gaps: 1431138 relations collected)
bdodson 91-93[/QUOTE]
[code]
 86002582 Mar 28 07:11 3+512.90M-91M.gz
176046496 Mar 28 07:11 3+512.91M-92M.gz
178093292 Mar 28 17:43 3+512.92M-93M.gz
[/code]
Looks like I got less than half before my filespace ran out. The person doing the installation reports adjusting the default setting for future initial accounts. I re-ran the three ranges Tom located; not sure it's worth looking for the other 17. Paul's correct, /tmp isn't stable; I'm archiving files elsewhere. -Bruce
[QUOTE=fivemack;129832]The Lonestar cluster seems quite an exciting machine: [url]http://www.tacc.utexas.edu/services/userguides/lonestar/[/url] [/QUOTE]
I need to get me one of these. :showoff:
[QUOTE=fivemack;129612]
[b]Reservations[/b] ...
**bdodson 90-91 (with gaps: 1431138 relations collected)
**bdodson 91-92 (no visible gaps; 2929210 relations)
bdodson 92-93
bsquared 93-100[/QUOTE]
I sent in the 3 sub-ranges identified as missing, but I'm disinclined to try tracking down the other ones. Instead, I'm reserving bdodson 123-124. If everything below 120M gets run, I'll replace the missing 90M subranges with q's another 20M out past what's running.

Wish me luck with getting relations here within the next day or so. I seem to have lasieve running on the Opteron cluster under condor (5 hrs of 12 hrs or so); except that I won't know whether the output is transferred back from the local node to the condor master until the range finishes (or, otherwise, just vanishes). If this doesn't work, I'll run these q's on the new quadcore cluster (which permits both use of the condor scheduler and "interactive" logins, unlike the Opteron cluster and the "old" quadcore cluster).

I also managed to get condor to accept a submission on the other ("old") quadcore cluster; so perhaps I'll see whether those ranges run (and, if so, whether I've managed to correctly instruct condor what to do with the output). Not clear how many jobs the scheduler will support, or whether I'll be able to figure out what to do with a _lot_ more data; but those would be problems I'd be happy to have. To hit a quarter of the max that Greg hit, I'd want something like 25-new-quad, 25-Opteron, and 50-old-quad; so as I was saying, wish me luck on today's 2+5 cpus. -Bruce
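For what it's worth, whether Condor ships output back from the execute node is normally controlled by the file-transfer settings in the submit description. A minimal sketch of such a file follows; the executable name matches the siever discussed in this thread, but the arguments, file names, and range are purely illustrative, not a real siever command line:

```
# hypothetical submit file: siever.sub
universe                = vanilla
executable              = gnfs-lasieve4I15e
arguments               = 3+512.poly -f 123000000 -c 1000000 -o 3+512.123M-124M.out
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
transfer_input_files    = 3+512.poly
transfer_output_files   = 3+512.123M-124M.out
output                  = siever.stdout
error                   = siever.stderr
log                     = siever.log
queue
```

With `when_to_transfer_output = ON_EXIT`, the named output file only comes back when the job finishes cleanly, which would match the "finishes or just vanishes" behaviour described above.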
As far as data goes, I have a dedicated 60GB partition on the upload machine, which is on a 100Mbit internet connection in Telehouse in London, so if you have some way of pushing straight from the sievers to ftp, I ought to be able to handle any even quite unreasonable amount of data: a whole lpbr=31 factorisation takes IME under 30GB uncompressed, no more than 15G compressed. There may of course be policy at your end making that more difficult, or other bottlenecks.
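On the siever side, pushing straight to the upload machine could be as simple as compressing each finished range and handing it to an FTP client. A rough sketch, with a placeholder file (standing in for real siever output) and a placeholder hostname/path where the actual upload machine's details would go:

```shell
# Placeholder relation file for illustration; a real run would use the
# siever's actual output file for the finished range.
RANGE="3+512.93M-94M"
printf '%s\n' 'example relation line' > "$RANGE"

# Compress before upload; per the figures above, lpbr=31 output
# compresses roughly 2:1.
gzip -c "$RANGE" > "$RANGE.gz"

# Push to the upload machine (placeholder URL; substitute the real host):
# curl -T "$RANGE.gz" "ftp://upload.example.org/incoming/"
```

Wrapping this in a loop over finished ranges, or calling it from the siever's wrapper script, would keep local disk usage bounded even on quota-limited accounts like the one mentioned earlier.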
I'll take the range from 50-60.
edit: Oops, just realized I don't actually have gnfs-lasieve4I15e. Does anyone have a cygwin or mingw or windows compiled version of this? Or, I have the sources to ggnfs, can someone tell me how to compile it so that it produces the 15e executable? Also (what I would like more!), does anyone have (or can anyone make) 64-bit binaries of 14e and 15e for Windows? That would be awesome! |
[quote=WraithX;130294]I'll take the range from 50-60.
edit: Oops, just realized I don't actually have gnfs-lasieve4I15e. Does anyone have a cygwin or mingw or windows compiled version of this? Or, I have the sources to ggnfs, can someone tell me how to compile it so that it produces the 15e executable? Also (what I would like more!), does anyone have (or can anyone make) 64-bit binaries of 14e and 15e for Windows? That would be awesome![/quote]
See post #8 in this thread: [URL]http://www.mersenneforum.org/showthread.php?t=10003[/URL]

I've compiled it for windows under mingw, but it was not very easy and I don't think I could give concise instructions. I basically did this:
[CODE]
do {
    err = compile;
    if (err) {
        hackish attempt to fix error;
    }
} while (err);
[/CODE]