
mersenneforum.org (https://www.mersenneforum.org/index.php)
-   NFS@Home (https://www.mersenneforum.org/forumdisplay.php?f=98)
-   -   BOINC NFS sieving - RSALS (https://www.mersenneforum.org/showthread.php?t=12458)

pinhodecarlos 2012-04-10 01:29

1 Attachment(s)
1567_67_minus1 done.

debrouxl 2012-04-10 05:49

Thanks for the post-processing help :smile:

[quote]I would suggest adding a remdups server side run in the rsals_data/ folders (triggered by 'all jobs received'; and/or once manually on the grandfathered projects).[/quote]
That's a good suggestion, for the reasons you state, and it would be easy to weave it into the existing cron jobs (or to create a new one)...

However, I don't think I can set it up for anything but the most trivial tasks (27- and 28-bit LPs, perhaps?) on this poor little server, which has a slow CPU (VIA C3), little RAM (1 GB), limited disk space (160 GB, and the image on the RSALS BOINC status page underestimates the disk occupancy ratio by 5%), and is rather unstable...

Batalov 2012-04-10 05:55

[QUOTE=debrouxl;295992]Thanks for the post-processing help :smile:


That's a good suggestion, for the reasons you state, and it would be easy to weave it into the existing cron jobs (or to create a new one)...

However, I don't think I can set it up for anything but the most trivial tasks (27- and 28-bit LPs, perhaps?) on this poor little server, which has a slow CPU (VIA C3), little RAM (1 GB), limited disk space (160 GB, and the image on the RSALS BOINC status page underestimates the disk occupancy ratio by 5%), and is rather unstable...[/QUOTE]
It is very fast, though, and not hard on resources; it is much gentler than msieve. You may want to try it on a few cases. You can lower the "222" to "100" or even "50": it will still run, use less memory, and leave just a few redundant relations in the output file (it never removes too much; it can remove a bit too little). After remdups4 is done, you can remove the larger, original file.
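For intuition, here is a toy Python sketch of a bounded-memory duplicate filter with the one-sided guarantee Batalov describes (this is my illustration, not remdups4's actual algorithm): only exact matches are ever dropped, so a unique relation is never removed, but when a full bucket evicts an entry, a later duplicate of it slips through.

```python
def dedup_bounded(lines, n_buckets=1 << 16, depth=8):
    """Stream filter: drop exact duplicate lines using bounded memory.

    Each bucket keeps at most `depth` recently seen lines.  Only exact
    matches are dropped, so unique lines always survive; when a bucket
    overflows and evicts a line, a later duplicate of it gets through.
    """
    buckets = [[] for _ in range(n_buckets)]
    for line in lines:
        b = buckets[hash(line) % n_buckets]
        if line in b:
            continue                # seen recently: a certain duplicate
        if len(b) >= depth:
            b.pop(0)                # evict oldest; errs toward keeping
        b.append(line)
        yield line

# Ample memory: every duplicate is removed.
print(list(dedup_bounded(["a", "b", "a", "c", "b"])))   # ['a', 'b', 'c']
# Tiny table: the second "a" survives, but nothing unique is lost.
print(list(dedup_bounded(["a", "b", "c", "a"], n_buckets=1, depth=2)))
```

On this reading, lowering remdups4's first argument ("222" to "100" or "50") shrinks the table, trading memory for a few surviving duplicates, which is exactly the safe direction for a later msieve filtering pass.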

pinhodecarlos 2012-04-10 07:12

Taking 563_79_minus1.

pinhodecarlos 2012-04-10 10:51

Lionel,

ETA 541_79_minus1 is 8 hours.
In queue: 563_79_minus1.

I was wondering whether, instead of reserving 29-bit-LP jobs, I could take a 30-bit-LP one after I am done with 563_79_minus1. Is 1753_71_minus1 reserved for Pace? Please let me know by the end of the day. Thank you.

Carlos

Batalov 2012-04-10 18:15

[QUOTE=Batalov;295928]1627_67_minus1 is done (that should help a bit with the disk space), and C147 seems ready - I am finishing it now.
I'll finish 1973_61_minus1 when it's done upping.[/QUOTE]
All done.

pinhodecarlos 2012-04-10 18:42

1 Attachment(s)
541_79_minus1 done.

debrouxl 2012-04-10 19:13

Thanks :smile:

I've reserved 1753_71_minus1 for you, because I'm not aware that Pace Nielsen is post-processing it.

pinhodecarlos 2012-04-10 21:56

[QUOTE=debrouxl;296050]Thanks :smile:

I've reserved 1753_71_minus1 for you, because I'm not aware that Pace Nielsen is post-processing it.[/QUOTE]

Thank you. Meanwhile, 563_79_minus1 will take longer than I expected, currently more than 25 hours, so 1753_71_minus1 is set to start at 1-2 am on 12/04/2012. The latter will probably take more than a week to process.

debrouxl 2012-04-11 06:18

I'm waiting for some more relations to trickle in, and I'll run remdups4 on 1973_61_minus1.
EDIT: well, since remdups takes only several minutes when redirecting the output to /dev/null, I ran it anyway :smile:
[code]gzip -dcfq 1973_61_minus1.dat.gz | ./remdups4 200 -v > /dev/null
Starting program at Wed Apr 11 08:18:36 2012
allocated 1310692 bytes for pointers
allocated 524288000 bytes for arrays
Wed Apr 11 08:18:42 2012 0.5M unique relns 0.01M duplicate relns (+0.01M, avg D/U ratio in block was 1.8%)
Wed Apr 11 08:18:46 2012 1.0M unique relns 0.04M duplicate relns (+0.03M, avg D/U ratio in block was 5.2%)
Wed Apr 11 08:18:51 2012 1.5M unique relns 0.08M duplicate relns (+0.04M, avg D/U ratio in block was 8.2%)
Wed Apr 11 08:18:56 2012 2.0M unique relns 0.13M duplicate relns (+0.05M, avg D/U ratio in block was 10.7%)
Wed Apr 11 08:19:01 2012 2.5M unique relns 0.20M duplicate relns (+0.07M, avg D/U ratio in block was 13.1%)
Wed Apr 11 08:19:10 2012 3.0M unique relns 0.27M duplicate relns (+0.08M, avg D/U ratio in block was 15.6%)
Wed Apr 11 08:19:17 2012 3.5M unique relns 0.37M duplicate relns (+0.10M, avg D/U ratio in block was 19.2%)
Wed Apr 11 08:19:23 2012 4.0M unique relns 0.48M duplicate relns (+0.11M, avg D/U ratio in block was 22.8%)
Wed Apr 11 08:19:28 2012 4.5M unique relns 0.63M duplicate relns (+0.14M, avg D/U ratio in block was 28.7%)
Wed Apr 11 08:19:34 2012 5.0M unique relns 0.79M duplicate relns (+0.16M, avg D/U ratio in block was 32.5%)
Wed Apr 11 08:19:40 2012 5.5M unique relns 0.97M duplicate relns (+0.18M, avg D/U ratio in block was 36.9%)
Wed Apr 11 08:19:46 2012 6.0M unique relns 1.17M duplicate relns (+0.20M, avg D/U ratio in block was 39.7%)
Wed Apr 11 08:19:53 2012 6.5M unique relns 1.39M duplicate relns (+0.22M, avg D/U ratio in block was 44.3%)
Wed Apr 11 08:19:59 2012 7.0M unique relns 1.63M duplicate relns (+0.23M, avg D/U ratio in block was 46.5%)
Wed Apr 11 08:20:11 2012 7.5M unique relns 1.89M duplicate relns (+0.27M, avg D/U ratio in block was 53.5%)
Wed Apr 11 08:20:21 2012 8.0M unique relns 2.17M duplicate relns (+0.28M, avg D/U ratio in block was 55.1%)
Wed Apr 11 08:20:29 2012 8.5M unique relns 2.43M duplicate relns (+0.26M, avg D/U ratio in block was 51.5%)
Wed Apr 11 08:20:36 2012 9.0M unique relns 2.70M duplicate relns (+0.27M, avg D/U ratio in block was 54.5%)
Wed Apr 11 08:20:43 2012 9.5M unique relns 3.01M duplicate relns (+0.32M, avg D/U ratio in block was 63.1%)
Wed Apr 11 08:20:52 2012 10.0M unique relns 3.39M duplicate relns (+0.37M, avg D/U ratio in block was 74.8%)
Wed Apr 11 08:21:00 2012 10.5M unique relns 3.76M duplicate relns (+0.37M, avg D/U ratio in block was 73.5%)
Wed Apr 11 08:21:12 2012 11.0M unique relns 4.14M duplicate relns (+0.39M, avg D/U ratio in block was 77.6%)
Wed Apr 11 08:21:21 2012 11.5M unique relns 4.50M duplicate relns (+0.36M, avg D/U ratio in block was 71.4%)
Wed Apr 11 08:21:30 2012 12.0M unique relns 4.86M duplicate relns (+0.36M, avg D/U ratio in block was 72.7%)
Wed Apr 11 08:21:38 2012 12.5M unique relns 5.26M duplicate relns (+0.39M, avg D/U ratio in block was 78.5%)
Found 12793586 unique, 5500219 duplicate (30.1% of total), and 20 bad relations.
Largest dimension used: 71 of 200
Average dimension used: 39.0 of 200
Terminating program at Wed Apr 11 08:21:43 2012[/code]

IOW, among the estimated 378492 relations that remain to be returned, at best, 100K will be unique...
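As a back-of-the-envelope check (the ratios are read off the last six log blocks above; the extrapolation is mine): even if the marginal D/U ratio stopped climbing at its recent average, the remaining relations would yield at most roughly 217K uniques.

```python
# Marginal duplicate/unique ratios from the last six remdups4 log blocks
# above (each block corresponds to 0.5M new unique relations).
ratios = [0.748, 0.735, 0.776, 0.714, 0.727, 0.785]
remaining = 378_492                  # relations still expected to be returned

r = sum(ratios) / len(ratios)        # recent average marginal D/U ratio
upper = remaining / (1 + r)          # unique yield if the ratio stops rising
print(f"avg marginal D/U = {r:.4f}, upper bound ~{upper:,.0f} uniques")
```

Since the marginal ratio keeps trending upward as the remaining sieve region saturates, the realistic yield falls well below this bound, consistent with the ~100K figure above.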

Batalov 2012-04-11 06:26

...but why?! It's already done!


Sorry, my terse "[URL="http://mersenneforum.org/showpost.php?p=296043&postcount=270"]All done[/URL]" meant all three of them.
(I usually watch the slope of the incoming relations and, once they are down to a trickle, check my projections, then do a final fetch and finalize.)

P.S. I [B]love[/B] the D/U ratios in your output! Very enlightening. This is where we do :doh!: and take a note: next time, use 28-bit LPs for this size. But this one managed to crawl to the finish line on its last vapors.

