mersenneforum.org > Factoring Projects > NFS@Home
Old 2020-09-01, 15:17   #12
pinhodecarlos ("Carlos Pinho", Oct 2011, Milton Keynes, UK)

I wouldn't run either, since on my machine it's faster to get more credit from the f version, even if the WU size is smaller and progress for the project is therefore slower. What counts is the credit per WU per unit of time; the rest is BS for BOINC clients.

What your side needs to ensure is that when you say the app is going to use X amount of memory, it actually uses less than X. If a queue ceases, the badge levels must be updated for the new "version" apps, and the credit updated as well; otherwise, as I said before, people will need farms to reach their own milestones.

Ideally, WUs should run 15-20 minutes and stay within 500 MB per HT core.
I can give you a call through WhatsApp.
Old 2020-09-01, 15:35   #13
VBCurtis ("Curtis", Feb 2005, Riverside, CA)

OK, I'll test a GNFS-185 (the largest job we're looking at for 15-small initially) with 15e and a couple lim combos to learn memory use; we can choose our maximum lim based on the data. Not sure I'll get to it today, but I'll have memory-use data for 15e in a day or so, and 16e by the weekend.

Workunits will always be a variable length of time, since any fixed Q-range for a WU will take quite a bit more time at GNFS-185 than GNFS-170. I don't think we can do much about that, unless the queue backend could control Q-range for a WU on a job-by-job basis; that sounds complicated.
Old 2020-09-01, 15:37   #14
pinhodecarlos ("Carlos Pinho", Oct 2011, Milton Keynes, UK)

Clients like badges. Go to https://stats.free-dc.org/proj/nfs and click on each subproject to see how many clients have reached each badge level. You will see that 14e and 15e have the fewest clients, which means something is wrong.

If I'm not mistaken, on my laptop the 15e WUs take 45 minutes and are credited 44 points, 14e takes more than an hour and credits 36, and 16e takes one hour and credits 130.
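The implied credit rates from those timings can be worked out directly. A quick sketch (the 70-minute figure for 14e is an assumption standing in for "more than an hour"):

```python
# Credits per hour implied by the laptop timings quoted above.
# 70 minutes for 14e is an assumed value for "more than an hour".
rates = {
    "14e": 36 / (70 / 60),   # 36 credits in ~70 minutes
    "15e": 44 / (45 / 60),   # 44 credits in 45 minutes
    "16e": 130 / (60 / 60),  # 130 credits in 60 minutes
}
for app, rate in sorted(rates.items(), key=lambda kv: kv[1]):
    print(f"{app}: ~{rate:.0f} credits/hour")
```

So 14e pays roughly half the rate of 15e and a quarter of 16e on that laptop, which matches the badge-count pattern.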
Old 2020-09-01, 16:03   #15
chris2be8 (Sep 2009)


Another option would be to change lasieved to run 14e with -j 14. That should still use less memory than 15e but give a higher yield than 14e does now. But I don't know exactly how much more memory it would use or how much higher the yield would be.

How does varying the LIMs affect memory use? Does memory go up linearly with LIM? And how does it affect yield and speed?
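On the linearity question, a first-order sketch: the factor base holds the primes below lim, and by the prime-counting estimate π(x) ≈ x/ln(x) that count grows slightly sublinearly in lim. A back-of-the-envelope estimate (the bytes-per-entry figure is an assumption for illustration, not a measured value for the lasieve binaries, and it ignores sieve-region and bucket memory):

```python
import math

def fb_entries(lim):
    """Approximate number of factor-base primes below lim,
    using the prime-counting estimate pi(x) ~ x / ln(x)."""
    return lim / math.log(lim)

def fb_mem_mb(lim, bytes_per_entry=8):
    """Rough factor-base memory in MiB; 8 bytes/entry is an assumption."""
    return fb_entries(lim) * bytes_per_entry / 2**20

# Doubling lim from 134M to 268M slightly less than doubles this estimate:
ratio = fb_mem_mb(268_000_000) / fb_mem_mb(134_000_000)
print(f"134M -> {fb_mem_mb(134_000_000):.0f} MiB, "
      f"268M -> {fb_mem_mb(268_000_000):.0f} MiB, ratio {ratio:.2f}")
```

So to first order, factor-base memory scales just under linearly with lim; the measured totals Curtis is collecting are what actually matters.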

Chris
Old 2020-09-01, 16:20   #16
henryzz, "Just call me Henry" ("David", Sep 2007, Cambridge (GMT/BST))


Of course, the proper thing to do would quite possibly be to switch to the CADO siever. This is probably harder, however, and would likely limit the project to Linux (and maybe Windows with WSL 2).
Old 2020-09-01, 17:54   #17
frmky (Jul 2003, So Cal)


Quote:
Originally Posted by VBCurtis
Is this true even if the old d and e queues cease to exist?
There's no reason the existing queues need to cease to exist. d can remain for any small, quick jobs that come up but normally be unused. And there's certainly no reason for e_small if there's no e.

I see this as adding additional granularity for the community:

d - occasional really small (by NFS@Home standards) jobs. This can sit empty most of the time. I can post a note on the status page that it's mostly retired.
e_small - small jobs
e - medium jobs
f_small - large jobs

The community can collectively define what small, medium, and large jobs are and what limits/memory use is appropriate for each. I can add community management pages (copies of the existing ones) for the new queues.

And of course I have my privileged f queue for my own interests. I can create badges for the new queues and stick with separate stats and badges.

Because of the way the apps are compiled for BOINC, changing the -j parameter would require a recompile of the apps. Much easier to just move up to the next siever.
Old 2020-09-01, 19:42   #18
VBCurtis ("Curtis", Feb 2005, Riverside, CA)


Quote:
Originally Posted by henryzz
Of course, the proper thing to do would quite possibly be to switch to the CADO siever. This is probably harder, however, and would likely limit the project to Linux (and maybe Windows with WSL 2).
Memory use is a massive issue here: CADO's 15 siever uses around 3 GB per process, and 16 uses 10 GB or more. One process can run many threads, but between the difficulty of compiling for Windows, wrapping it for BOINC, and the memory needs, this doesn't seem likely to happen.

If such big-memory clients are numerous, Greg could go after much larger jobs than his previous record, like GNFS-230. That's the upside.
Old 2020-09-01, 19:42   #19
swellman (Jun 2012)
Mark on the Wall


Suggestion just to kick off discussions:


Code:
 Siever     lpbr/a    r/alim (M)
 ================================
    d         31         134
 e_small      31         134
 e_medium     32         268
 f_small     32-33     268-536


 SNFS      <225     225-235     235-255     255-270     270+
 ==================================================================
 deg 4       d      e_small     e_medium    f_small    f_small
 deg 5,6    DIY        d        e_small     e_medium   f_small
 deg 8       d      e_small     e_medium    f_small    f_small


 GNFS      <160     160-170     170-180     180-197     197+
 ==================================================================
 deg 5      DIY        d        e_small     e_medium   f_small
 deg 6                                                 f_small

Last fiddled with by swellman on 2020-09-01 at 19:45
Old 2020-09-01, 20:00   #20
pinhodecarlos ("Carlos Pinho", Oct 2011, Milton Keynes, UK)


Quote:
Originally Posted by VBCurtis
Memory use is a massive issue here: CADO's 15 siever uses around 3 GB per process, and 16 uses 10 GB or more. One process can run many threads, but between the difficulty of compiling for Windows, wrapping it for BOINC, and the memory needs, this doesn't seem likely to happen.

If such big-memory clients are numerous, Greg could go after much larger jobs than his previous record, like GNFS-230. That's the upside.
Shall we run a trial and see how popular it would be on NFS@Home? The number of threads can be overridden via a BOINC app_config file, with the user manually setting how many threads the CADO app runs. I would just recommend giving extra credits... lol. I can ask SETI.USA friends to test it; I will also need more input from them regarding the app_config file.

Edit: trial on Linux only for now
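For reference, the per-app thread override lives in an app_config.xml in the project's directory under the BOINC client. A sketch of what one might look like for a hypothetical CADO app is below; the app name cado_las, the thread counts, and the -t flag passed through to the siever are all assumptions, not real deployed values:

```xml
<!-- app_config.xml in the NFS@Home project directory under the BOINC client.
     "cado_las" is a placeholder app name; the real one would come from the
     project. avg_ncpus tells BOINC how many cores to budget per task, and
     cmdline passes extra arguments (-t assumed to be the thread flag). -->
<app_config>
  <app>
    <name>cado_las</name>
    <max_concurrent>2</max_concurrent>
  </app>
  <app_version>
    <app_name>cado_las</app_name>
    <avg_ncpus>4</avg_ncpus>
    <cmdline>-t 4</cmdline>
  </app_version>
</app_config>
```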

Last fiddled with by pinhodecarlos on 2020-09-01 at 20:03
Old 2020-09-01, 20:02   #21
VBCurtis ("Curtis", Feb 2005, Riverside, CA)


Quote:
Originally Posted by frmky
There's no reason the existing queues need to cease to exist. d can remain for any small, quick jobs that come up but normally be unused. And there's certainly no reason for e_small if there's no e.

I see this as adding additional granularity for the community:

d - occasional really small (by NFS@Home standards) jobs. This can sit empty most of the time. I can post a note on the status page that it's mostly retired.
e_small - small jobs
e - medium jobs
f_small - large jobs
The community can collectively define what small, medium, and large jobs are and what limits/memory use is appropriate for each. I can add community management pages (copies of the existing ones) for the new queues.
If d continues to exist though less used, and we create an f_small, I don't see a reason for an e_small. e would cover, say, GNFS 170 to 185, a narrow-enough range that the queue should not back up like it has previously. Moving GNFS 185+ jobs, and SNFS jobs that would go above Q=300M (350M?), to f_small leaves a range on e that should be fast-moving for medium-sized jobs.

Balancing the points/WU size so that Greg's queue gets by far the most work but f_small and e get similar attention will take some work. The d-queue points problem will mostly vanish once we're not sending 15e-sized jobs through there; workunits won't vary from 15 to 75 minutes anymore.

My proposal, then, is to create f_small, and we tweak points allocated for each of our queues.

d: GNFS under 170, SNFS clearly faster on 14e. Lim cap enforced to keep memory use low.
e: GNFS 170-185, SNFS with a lim cap (268M?) to keep WU and job length consistent.
f_small: GNFS 185+, SNFS ~280+. Maximums limited by our ability to solve the resulting matrices.
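The GNFS side of that 4-queue proposal can be written out as a small routing sketch (thresholds from the list above; how the boundary values 170 and 185 themselves are assigned is my assumption):

```python
def queue_for_gnfs(digits):
    """Route a deg-5 GNFS job to a queue under the 4-queue proposal above.
    Boundary handling (which side 170 and 185 fall on) is an assumption."""
    if digits < 170:
        return "d"        # lim cap enforced to keep memory use low
    elif digits <= 185:
        return "e"        # lim cap (268M?) keeps WU and job length consistent
    else:
        return "f_small"  # limited by our ability to solve the matrices

print(queue_for_gnfs(165), queue_for_gnfs(180), queue_for_gnfs(190))
```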

Alternate proposal using Greg's 5 queues:
d: same as above
e_small: GNFS 170-180, SNFS with Q-max capped at 200M (arbitrary measure of job size). lim capped at 134M to keep memory use low.
e: the rest of the jobs best run on 15e
f_small: Whatever is faster on 16e

Will we have more trouble/complaints trying to balance points per WU for 5 queues than 4? 4 seems enough and simpler, but perhaps I'm missing something (e.g. more queues = more badges = more participants?).
Old 2020-09-01, 20:08   #22
VBCurtis ("Curtis", Feb 2005, Riverside, CA)


Quote:
Originally Posted by pinhodecarlos
Shall we run a trial and see how popular it would be on NFS@Home? The number of threads can be overridden via a BOINC app_config file, with the user manually setting how many threads the CADO app runs. I would just recommend giving extra credits... lol. I can ask SETI.USA friends to test it; I will also need more input from them regarding the app_config file.

Edit: trial on Linux only for now
I lack the coding skills to even consider contributing to making a BOINC wrapper for CADO-las. I'm not even sure I could manually call las from the command line successfully.

Cool to know threads-per-client can be set in the BOINC software; one hurdle removed!