mersenneforum.org Let's change the NFS@home sievers!

 2020-09-01, 15:17 #12 pinhodecarlos     "Carlos Pinho" Oct 2011 Milton Keynes, UK 3³·181 Posts I wouldn't run either, since on my machine it's faster to get more credit from the f version, even if the WU size is smaller and progress for the project is therefore slower. What counts is credit per WU per unit of time; the rest is BS for BOINC clients. What your side needs to ensure is that when you say the app is going to use X amount of memory, it actually uses less than X. If a queue ceases to exist, then the badge levels must be updated for the new "version" apps, and the credit updated as well; otherwise, as I said before, people will need farms to reach their own milestones. Ideally WUs should be 15-20 minutes and within 500MB per HT core. I can give you a call through WhatsApp.
 2020-09-01, 15:35 #13 VBCurtis     "Curtis" Feb 2005 Riverside, CA 11201₈ Posts OK, I'll test a GNFS-185 (the largest job we're looking at for 15-small initially) with 15e and a couple of lim combos to learn memory use; we can choose our maximum lim based on the data. Not sure I'll get to it today, but I'll have memory-use data for 15e in a day or so, and 16e by the weekend. Workunits will always be a variable length of time, since any fixed Q-range for a WU will take quite a bit more time at GNFS-185 than GNFS-170. I don't think we can do much about that, unless the queue backend could control Q-range for a WU on a job-by-job basis; that sounds complicated.
 2020-09-01, 15:37 #14 pinhodecarlos     "Carlos Pinho" Oct 2011 Milton Keynes, UK 4887₁₀ Posts Clients like badges. Go here https://stats.free-dc.org/proj/nfs and click on each sub-project to see how many clients managed to achieve each of the badge levels across all sub-projects. You will see that 14e and 15e have the fewest clients, which means something is wrong. If I'm not mistaken, on my laptop the 15e WUs take 45 mins and are credited 44 points, 14e takes more than an hour and credits 36, and 16e takes one hour and credits 130.
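A quick credits-per-hour comparison makes the imbalance Carlos describes concrete. The sketch below uses his laptop figures; the 70-minute runtime for 14e is an assumption, since he only says "more than an hour":

```python
# Credits-per-hour from the laptop figures quoted above.
# 14e's 70-minute runtime is a guess ("more than an hour").
jobs = {
    "14e": {"minutes": 70, "credit": 36},
    "15e": {"minutes": 45, "credit": 44},
    "16e": {"minutes": 60, "credit": 130},
}

for name, j in sorted(jobs.items()):
    per_hour = j["credit"] * 60 / j["minutes"]
    print(f"{name}: ~{per_hour:.1f} credits/hour")
```

On these numbers 16e pays roughly four times what 14e does per hour of CPU time, which would explain why the 14e and 15e badge counts lag.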
 2020-09-01, 16:03 #15 chris2be8     Sep 2009 2028₁₀ Posts Another option would be to change lasieved to run 14e with -j 14. That should still use less memory than 15e but have a higher yield than 14e does now. But I don't know exactly how much more memory it would use or how much higher the yield would be. How does varying the lims affect memory use? Does memory go up linearly with lim? And how does it affect yield and speed? Chris
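On the lim-versus-memory question: the main lim-dependent structure is the factor base, with roughly one entry per prime below lim, so a back-of-envelope model via the prime number theorem suggests growth is almost linear, but slightly sub-linear, in lim. This is only a sketch: the bytes-per-entry figure is a guess rather than lasieve's actual layout, and the sieve region and bucket storage add memory on top that doesn't scale with lim.

```python
from math import log

def fb_entries(lim):
    """Approximate count of factor-base primes below lim,
    via the prime number theorem: pi(x) ~ x / ln(x)."""
    return lim / log(lim)

def fb_memory_mb(lim, bytes_per_entry=8):
    """Rough factor-base memory in MB. The 8-bytes-per-entry
    figure is an assumption, not lasieve's real entry size."""
    return fb_entries(lim) * bytes_per_entry / 2**20

for lim in (134_000_000, 268_000_000, 536_000_000):
    print(f"lim = {lim // 10**6}M -> ~{fb_memory_mb(lim):.0f} MB factor base")
```

Doubling lim multiplies the entry count by 2·ln(lim)/ln(2·lim) ≈ 1.93 at these sizes, so the factor-base portion of memory a bit less than doubles when lim doubles.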
 2020-09-01, 16:20 #16 henryzz Just call me Henry     "David" Sep 2007 Cambridge (GMT/BST) 2·29·101 Posts Of course, the proper thing to do would quite possibly be to switch to the CADO siever. This is probably harder, however, and would likely limit us to Linux (and maybe Windows with WSL 2).
2020-09-01, 17:54   #17
frmky

Jul 2003
So Cal

822₁₆ Posts

Quote:
 Originally Posted by VBCurtis Is this true even if the old d and e queues cease to exist?
There's no reason the existing queues need to cease to exist. d can remain for any small, quick jobs that come up but normally be unused. And there's certainly no reason for e_small if there's no e. I see this as adding additional granularity for the community:

d - occasional really small (by NFS@Home standards) jobs. This can sit empty most of the time. I can post a note on the status page that it's mostly retired.
e_small - small jobs
e - medium jobs
f_small - large jobs

The community can collectively define what small, medium, and large jobs are and what limits/memory use is appropriate for each. I can add community management pages (copies of the existing ones) for the new queues.

And of course I have my privileged f queue for my own interests. I can create badges for the new queues and stick with separate stats and badges.

Because of the way the apps are compiled for BOINC, changing the -j parameter would require a recompile of the apps. Much easier to just move up to the next siever.

2020-09-01, 19:42   #18
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

3·1,579 Posts

Quote:
 Originally Posted by henryzz Of course, the proper thing to do would quite possibly be to switch to the CADO siever. This is probably harder, however, and would likely limit to Linux(and maybe windows with WSL 2).
Memory use is a massive issue here: CADO's 15 siever uses around 3GB per process, and 16 uses 10GB or more. One process can have many threads, but between the difficult challenge of compiling for Windows, BOINC-wrappering it, and the memory needs, this doesn't seem likely to happen.

If such big-memory clients are numerous, Greg could go after much larger jobs than his previous record, like GNFS-230. That's the upside.

 2020-09-01, 19:42 #19 swellman     Jun 2012 3²·331 Posts A mark-on-the-wall suggestion, just to kick off discussions:
Code:
Siever     lpbr/a   r/alim (M)
===================================
d          31       134
e_small    31       134
e_medium   32       268
f_small    32-33    268-536

SNFS       <225     225-235    235-255    255-270    270+
===================================================================
deg 4      d        e_small    e_medium   f_small    f_small
deg 5,6    DIY      d          e_small    e_medium   f_small
deg 8      d        e_small    e_medium   f_small    f_small

GNFS       <160     160-170    170-180    180-197    197+
===================================================================
deg 5      DIY      d          e_small    e_medium   f_small
deg 6                                                f_small
Last fiddled with by swellman on 2020-09-01 at 19:45
2020-09-01, 20:00   #20
pinhodecarlos

"Carlos Pinho"
Oct 2011
Milton Keynes, UK

3³·181 Posts

Quote:
 Originally Posted by VBCurtis Memory use is a massive issue here: CADO's 15 siever uses around 3GB per process, and 16 uses 10GB or more. One process can have many threads, but between the difficult challenge of compiling for Windows, BOINC-wrappering it, and the memory needs, this doesn't seem likely to happen. If such big-memory clients are numerous, Greg could go after much larger jobs than his previous record, like GNFS-230. That's the upside.
Shall we run a trial and see how popular it is on NFS@Home? The number of threads can be overridden through BOINC's app_config, with the user manually setting how many threads the CADO app should run. I would just recommend giving extra credits... lol. I can ask SETI.USA friends to test it, and I will also need more input from them regarding the app_config file.
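For reference, the per-project override Carlos is describing lives in an app_config.xml file in the BOINC project directory. A sketch of what it might look like (the app name cado_las and the -t thread flag are assumptions, since no CADO app exists on NFS@Home yet):

```xml
<app_config>
  <app_version>
    <app_name>cado_las</app_name>
    <!-- Tell the scheduler this task occupies 4 CPUs... -->
    <avg_ncpus>4</avg_ncpus>
    <!-- ...and pass the thread count to the app itself. -->
    <cmdline>-t 4</cmdline>
  </app_version>
</app_config>
```

The client picks this up after "Options → Read config files" or a restart, so testers could tune threads per host without any server-side change.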

Edit: trial on Linux only for now

Last fiddled with by pinhodecarlos on 2020-09-01 at 20:03

2020-09-01, 20:02   #21
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

3×1,579 Posts

Quote:
 Originally Posted by frmky There's no reason the existing queues need to cease to exist. d can remain for any small, quick jobs that come up but normally be unused. And there's certainly no reason for e_small if there's no e. I see this as adding additional granularity for the community: d - occasional really small (by NFS@Home standards) jobs. This can sit empty most of the time. I can post a note on the status page that it's mostly retired. e_small - small jobs e - medium jobs f_small - large jobs The community can collectively define what small, medium, and large jobs are and what limits/memory use is appropriate for each. I can add community management pages (copies of the existing ones) for the new queues.
If d continues to exist, though less used, and we create an f_small, I don't see a reason for an e_small. e would cover, say, GNFS 170 to 185, a narrow enough range that the queue should not back up like it has previously. Moving GNFS 185+ jobs, and SNFS jobs that would go above Q=300M (350M?), to f_small leaves a range on e that should be fast-moving for medium-sized jobs.

Balancing the points/WU size so that Greg's f queue gets by far the most work, while f_small and e get similar attention, will take some work. The d-queue points problem will mostly vanish once we're not sending 15e-sized jobs through there; workunits won't vary from 15 to 75 minutes anymore.

My proposal, then, is to create f_small, and we tweak points allocated for each of our queues.

d: GNFS under 170, SNFS clearly faster on 14e. Lim cap enforced to keep memory use low.
e: GNFS 170-185, SNFS with a lim cap (268M?) to keep WU and job length consistent.
f_small: GNFS 185+, SNFS ~280+. Maximums limited by our ability to solve the resulting matrices.
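The GNFS side of the three-queue proposal above boils down to a simple dispatch rule. A sketch (the boundaries are taken from the post; which queue gets exactly 185 is my guess, and these are discussion points, not settled policy):

```python
def gnfs_queue(difficulty):
    """Map a GNFS job's difficulty (decimal digits) to a queue
    under the three-queue proposal: d / e / f_small.
    Boundary values per VBCurtis's post; the choice to put
    exactly 185 on e rather than f_small is an assumption."""
    if difficulty < 170:
        return "d"
    elif difficulty <= 185:
        return "e"
    else:
        return "f_small"

for d in (155, 170, 185, 200):
    print(f"GNFS-{d} -> {gnfs_queue(d)}")
```

The SNFS side would need an analogous rule keyed on lim or maximum Q rather than difficulty alone.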

Alternate proposal using Greg's 5 queues:
d: same as above
e_small: GNFS 170-180, SNFS with Q-max capped at 200M (arbitrary measure of job size). lim capped at 134M to keep memory use low.
e: the rest of the jobs best run on 15e
f_small: Whatever is faster on 16e

Will we have more trouble/complaints trying to balance points per WU for 5 queues than 4? 4 seems enough and simpler, but perhaps I'm missing something (e.g. more queues = more badges = more participants?).

2020-09-01, 20:08   #22
VBCurtis

"Curtis"
Feb 2005
Riverside, CA

3·1,579 Posts

Quote:
 Originally Posted by pinhodecarlos Shall we make a trial and see how popular it will be on NFS@Home? The number of threads can be overwritten by BOINC app_config, manually set by the user how many threads to run the CADO app. I would just recommend giving extra credits...lol I can request SETI.USA friends to test it, also I will need more input from them regarding the app_config file. Edit: trial on Linux only for now
I lack the coding skills to even consider contributing to making a BOINC wrapper for CADO-las. I'm not even sure I could manually call las from the command line successfully.

Cool to know threads-per-client can be set in the BOINC software; one hurdle removed!

