[QUOTE=Andi47;156355]I will take 40-42 (A side)
I will start sieving on Jan 7th or 8th, but I want to grab one of the low ranges, which (presumably?) take less memory due to the lower alim, as long as these ranges are available. (Please correct me if I'm wrong about memory usage.) (These huge GNFSes are almost maxing out the memory the siever can use while staying (almost) invisible on my office box, i.e. without slowing down other programs too much.)[/QUOTE] I started my range this morning on my office PC. After running for ~6:40 hours, I estimate that it will run for ~693 hours = 28-29 days (--> ETA: Feb. 6th). Is this too much? [i]Iff[/i] yes, then I could reduce my reservation accordingly. |
I think a Feb 6th ETA is fine, this is quite a large project and I expect sieving to take more than a month.
|
Bad news?
We just noticed that we are almost finished with both sides of 30-35M, but we were supposed to be doing both sides of 25-30M. Should we jump on 25-30M now? It will take us about 12 days to do the work. (Even our examples earlier in this thread used the wrong range. Talk about dumb!) :cry: |
1 Attachment(s)
We uploaded a few of our files, in 1M increments. We used lowercase to set our files apart from the ones that are already there.
We noticed our files are significantly smaller than the other files that are in 1M increments. We sure hope we are not doing something wrong. In the meantime, we have started on the 25-30M range. We think we can complete it in time and not cause the project any delays. |
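When comparing uploads, counting relations is more informative than comparing raw .bz2 sizes, since compression ratios vary. A sketch using tiny synthetic stand-in files (the file names and contents are illustrative, not real relation data):

```shell
# Sketch: count relations (one per line) inside bzip2-compressed files.
# These sample files are synthetic stand-ins, not real siever output.
printf '1,2:x:y\n3,4:p:q\n5,6:m:n\n' | bzip2 -c > r-sample.bz2
printf '1,2:x:y\n3,4:p:q\n'          | bzip2 -c > a-sample.bz2
for f in r-sample.bz2 a-sample.bz2; do
    printf '%s: %s relations\n' "$f" "$(bzcat "$f" | wc -l | tr -d ' ')"
done
```

A large per-1M-range gap in relation counts (rather than just file size) is what would actually indicate a yield problem such as sieving with the wrong binary.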
It was my range, but ##it happens. No problem.
Ouch. I'll have a look at your files, and compare to mine. Yes, my files are bigger than yours and similar in size to JF's. So your files may be sparser than mine (my wild guess is that they were sieved with 14e, but I am not casting any stones - I've done that myself a few times) ...but it still doesn't make any sense for me to finish over-sieving this range with 15e, so I'll take another range instead.
I've already uploaded R30-31M.bz2 and A30-31M.bz2 and those are in line with sizes of JF's files, who this time led the pack with the first pancakes. I also have sieved as far as 30-31.8 on both sides and will upload those and then will get another range. It's OK. It happens to any of us. <S> |
Looking at the uploads, xyzzy's runs do appear to have been done with 14e rather than 15e; please use 15e for the 25-30 range. It's quite a lot slower than 14e, but the extra yield over 14e means that we can use a shorter Q-range without running into duplicate relation issues, and get the whole thing finished quicker.
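The duplicate-relation concern above can be checked after the fact: two copies of the same relation share the same leading "a,b" pair, so a dedup pass can key on the field before the first colon. A sketch with synthetic data (the relation format is assumed from the usual GGNFS convention, not verified against these files):

```shell
# Sketch: deduplicate relations by their leading "a,b" field.
# Synthetic sample data; real relation files are far larger.
printf '1,2:x:y\n3,4:p:q\n1,2:x:y\n5,6:m:n\n' > rels.txt
awk -F: '!seen[$1]++' rels.txt > rels.uniq   # keep first occurrence of each a,b
wc -l < rels.uniq | tr -d ' '                # → 3 (one duplicate removed)
```

Running this over the combined uploads would quantify how much overlap a shorter Q-range actually introduces.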
|
The binary we downloaded says 15e. How can we know for sure we have the right one?
We are using the binary from this post: [URL]http://www.mersenneforum.org/showpost.php?p=152310&postcount=5[/URL] ([URL]http://snp.gnf.org/gnfs-lasieve4I15e.zip[/URL]) We renamed our existing binary and tested it: [code]$ mv gnfs-lasieve4I15e gnfs-lasieve4I15e~
$ wget -q http://snp.gnf.org/gnfs-lasieve4I15e.zip
$ unzip -q gnfs-lasieve4I15e.zip
$ rm *zip
$ ls -l g*
-rwxrwxrwx 1 m m 820000 2008-11-07 17:11 gnfs-lasieve4I15e
-rwxr-xr-x 1 m m 820000 2008-11-07 17:10 gnfs-lasieve4I15e~
$ md5sum g*
e2396ee3232c5b5eea53d785d4f89cb2  gnfs-lasieve4I15e
e2396ee3232c5b5eea53d785d4f89cb2  gnfs-lasieve4I15e~[/code]All we want is a binary that is optimized for a Phenom quad running 64-bit Linux. |
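The matching md5sums above already show the two binaries are the same file. As a complementary check, cmp compares byte for byte without hashing (the files here are tiny fakes for illustration):

```shell
# Sketch: byte-for-byte comparison, an alternative to comparing hashes.
# bin-new and bin-old are placeholder files, not the real siever binaries.
printf 'fake siever bytes' > bin-new
printf 'fake siever bytes' > bin-old
if cmp -s bin-new bin-old; then echo "identical"; else echo "different"; fi
```

Identical bytes mean identical behavior, so re-downloading cannot answer the 14e-vs-15e question; only the source of the original download can.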
Here are 2 questions that are less important than the stuff above:
[LIST][*]In the polynomial file we have "alim: 125000000" and "rlim: 100000000". Obviously, we need to lower the alim value when working on the "A" side and the rlim value when working on the "R" side if, for example, we are working at 25M. Is it safe to lower both in the polynomial file at the same time, so we can use a shared polynomial file?[*]Is there any potential loss (other than some overhead time) from using smaller chunks versus larger chunks? We've played with 100,000 and 10,000 chunks so far. We've added up the elapsed time from the 10,000 chunks and the total is pretty close to a single 100,000 chunk. Are we losing any data in the process by using smaller chunks?[/LIST] |
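On the chunk-size question: each special-q is sieved independently, so a large range run as many small -f/-c chunks produces the same relations as one big run, just with more startup overhead. A sketch of splitting a 100,000-q range into ten 10,000-q chunks (commands are echoed rather than executed; the job file name is hypothetical):

```shell
# Sketch: split a 100,000-q range into ten 10,000-q chunks using the
# -f (first special-q) / -c (count) flags seen elsewhere in this thread.
# job.poly and the output names are placeholders.
start=25000000
chunk=10000
for i in 0 1 2 3 4 5 6 7 8 9; do
    f=$((start + i * chunk))
    echo "./gnfs-lasieve4I15e -a job.poly -f $f -c $chunk -o out.$f"
done
```

Concatenating the out.* files afterwards recovers exactly what a single -f 25000000 -c 100000 run would have produced.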
[QUOTE=Xyzzy;158139]Here are 2 questions that are less important than the stuff above:
[LIST][*]In the polynomial file we have "alim: 125000000" and "rlim: 100000000". Obviously, we need to lower the alim value when working on the "A" side and the rlim value when working on the "R" side if, for example, we are working at 25M. Is it safe to lower both in the polynomial file at the same time, so we can use a shared polynomial file?[/LIST][/QUOTE] I think this would result in a significant loss of relations found per q. (Test, for example, by running gnfs-lasieve4I15e -a inputfile -o outputfile -f 25000000 -c 5000 both with rlim = 100M and with rlim = 25M.) |
fivemack, didn't you have a gnfs-lasieve4I??e binary that would sieve below the factor-base bound at one point?
|
[quote]I think this would result in a significant loss of relations found per q. (Test, for example, by running gnfs-lasieve4I15e -a inputfile -o outputfile -f 25000000 -c 5000 both with rlim = 100M and with rlim = 25M.)[/quote]We are running the test you suggested. It is not done yet, but something tells us that you are right. Look at the report from "top" below, and note the difference in memory use. The file name given after the "-a"/"-r" argument encodes the alim/rlim values in that polynomial file (e.g. a025r100 means alim 25M, rlim 100M).
[code]  PID USER  PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
 6947 m     25  0  540m 231m  536 R  100  4.0  5:52.57 ./gnfs-lasieve4I15e -a a025r100 -f 25000000 -c 5000 -o a
 6949 m     25  0  620m 270m  512 R  100  4.6  4:29.90 ./gnfs-lasieve4I15e -r a125r025 -f 25000000 -c 5000 -o r
 6960 m     25  0  263m 100m  508 R  100  1.7  2:19.26 ./gnfs-lasieve4I15e -a a025r025 -f 25000000 -c 5000 -o ax
 6967 m     25  0  263m 100m  496 R  100  1.7  1:19.60 ./gnfs-lasieve4I15e -r a025r025 -f 25000000 -c 5000 -o rx[/code] |