mersenneforum.org > Factoring Projects > NFS@Home
2020-09-01, 20:12   #23
pinhodecarlos

Greg,

Can you pull some queries from the SQL database?

- How many clients have all apps activated? I can't remember your priority breakdown; was it 15/15/70?
- How many clients are running only one individual app?

The bulk of the output comes from Gridcoin, and looking at the stats, the majority of the WUs are from 16f.
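Something along these lines would answer both questions from the stock BOINC server schema (result and app tables). It's only a sketch: it uses returned results as a proxy for which apps a client runs, since the real app selections live in the XML project preferences, and the actual table or column names on your side may differ:

    -- Proxy for "clients with all apps activated": hosts with results for every app.
    SELECT COUNT(*) AS all_app_hosts
    FROM (SELECT hostid
          FROM result
          GROUP BY hostid
          HAVING COUNT(DISTINCT appid) = (SELECT COUNT(*) FROM app)) AS t;

    -- Clients running only one app, broken down by which app that is.
    SELECT app.name, COUNT(*) AS hosts
    FROM (SELECT hostid, MIN(appid) AS only_app
          FROM result
          GROUP BY hostid
          HAVING COUNT(DISTINCT appid) = 1) AS single
    JOIN app ON app.id = single.only_app
    GROUP BY app.name;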
2020-09-01, 20:15   #24
VBCurtis

Quote:
Originally Posted by chris2be8
How does varying the LIMs affect memory use? Does memory go up linearly with LIM? And how does it affect yield and speed?

Chris

Memory use rises with LIM, maybe not quite linearly but reasonably close (something like 100-200MB of overhead plus a linear term in LIM; I haven't modeled it, but that's my sense).

Memory use does not seem to change with LP size, just disk space. Yield and speed are compromised by choosing lim's much too small, but if we make a new queue that problem goes away, since the right siever becomes available for every job.

It should be noted that 15e with LIM=134M uses about the same memory as 14e with LIM=268M, and the same is roughly true for 16e vs 15e. So, if we put jobs in the right queues with the right sievers, we may well end up using less memory overall, because we've been inflating LIM choices to squeeze jobs onto 14e (higher lims generally improve yield a bit, at the expense of speed).

I'll run some tests tonight and tomorrow to get more exact memory-use numbers; my recollections and guesses may be way off!
2020-09-01, 20:17   #25
pinhodecarlos

Quote:
Originally Posted by VBCurtis
I lack the coding skills to even consider contributing to making a BOINC wrapper for CADO-las. I'm not even sure I could manually call las from the command line successfully.

Cool to know threads-per-client can be set in the BOINC software; one hurdle removed!

Basically there are two BOINC config files you can use for this, though I'm not an expert on them. I know you can pass extra command-line flags to an app through them; PrimeGrid users use this to run LLR with a threads flag, otherwise you would have to create special apps for 4 threads, 8, 12, and so on. These files let the client set up the BOINC app its own way, to balance threads across any project app. If needed I'll get Steve Cole to help us out; he knows Linux and Windows inside out.
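Purely as a sketch of the idea (not an existing NFS@Home setup): an app_config.xml in the project directory can pin a multithreaded app to a chosen thread count. The app name and the -t flag below are placeholders for whatever a real CADO-las app would use:

    <app_config>
      <app_version>
        <app_name>cado_las</app_name>   <!-- hypothetical app name -->
        <plan_class>mt</plan_class>     <!-- assumes a multithreaded plan class -->
        <avg_ncpus>4</avg_ncpus>        <!-- tell BOINC each task uses 4 cores -->
        <cmdline>-t 4</cmdline>         <!-- pass the app its own threads flag -->
      </app_version>
    </app_config>

PrimeGrid people do essentially this to run LLR with a chosen thread count instead of needing a separate app per thread count.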

Last fiddled with by pinhodecarlos on 2020-09-01 at 20:21
2020-09-01, 20:23   #26
xilman

Generic comment: I am now completely confused by all this complexity.

I don't give a fig about badges, milestones, WUs, etc. All I want to know is which queue should be used for a specific range of factorizations.

When things have settled down, could someone please produce two tables which summarize "for GNFS-xxx to GNFS-yyy use queue foo" and "for SNFS-aaa to SNFS-bbb use queue bar"?

ATM I am interested in GNFS-180 through GNFS-190 and SNFS-260 through SNFS-280. ISTM that GNFS is of general interest between 160 and 210 digits and SNFS between 240 and 320 digits.

Thanks.

Last fiddled with by xilman on 2020-09-02 at 08:19
2020-09-01, 20:23   #27
pinhodecarlos

Another question to ask is about final matrix size as a function of the various sieve options, since we have friends here who only post-process 14e jobs with 29- or 30-bit LP sizes.
2020-09-01, 20:27   #28
pinhodecarlos

Quote:
Originally Posted by xilman
Generic comment: I am now completely confused by all this complexity.

I don't give a fig about badges, milestones, WUs, etc. All I want to know is which queue should be used for a specific range of factorizations.

When things have settled down, could someone please produce two tables which summarize "for GNFS-xxx to GNFS-yyy use queue foo" and "for SNFS-aaa to SNFS-bbb use queue bar"?

ATM I am interested in GNFS-180 through GNFS-190 and SNFS-260 through SNFS-280. ISTM that GNFS is of general interest between 160 and 210 digits and SNFS between 240 and 320 digits.

Thanks.

Curtis needs to trial all the variations, specify memory requirements, and fill in Sean's table?!

2020-09-01, 21:06   #29
frmky

Quote:
Originally Posted by pinhodecarlos
Shall we run a trial and see how popular it will be on NFS@Home? The number of threads can be overridden via BOINC app_config, with the user manually setting how many threads the CADO app runs. I would just recommend giving extra credits...lol. I can ask SETI.USA friends to test it; I will also need more input from them regarding the app_config file.

Edit: trial on Linux only for now

Incorporating CADO las isn't on the table right now. I just don't have the bandwidth for it.
2020-09-02, 03:36   #30
VBCurtis

Some memory-use data: I'm shocked that Lim choice matters more than which siever!

First test, GNFS-172 with 31LP.
Test conditions: Q=140M (bigger than lim's, to maximize memory requirement), Q-range of 1000. Memory use is the "res" column of 'top'.
16e lim's 134M/134M used 427MB. 0.238 sec/rel to find 6051 rels.
15e lim's 134M/134M used 427MB. 0.192 sec/rel to find 2693 rels.
14e lim's 134M/134M used 382MB. 0.231 sec/rel to find 1227 rels.

I then set lim's to 268M to test 14e: 681MB used at Q=270M!

Second test, GNFS-185 with 32LP.
16e lim's 268M/268M used 932MB. 0.443 sec/rel to find 3506 rels at Q=270M.
16e lim's 134M/134M used 427MB. 0.391 sec/rel to find 3488 rels at Q=140M.
15e lim's 268M/268M used 767MB. 0.426 sec/rel to find 1520 rels at Q=270M.

Note that 16e beats 15e at this size despite 4% higher sec/rel: the smaller Q-range needed and the lower duplicate rate make 16e a clear winner at this one tested Q.

Someone should do similar SNFS testing on a number at the big end of 14e that we'd like to move to 15e-small or 15e, and again for a borderline 15e/16e-small candidate.

For GNFS, I conclude we could advertise 500MB memory use for d (14e), 600MB for 15e-small, and 1GB for 15e. I doubt we'd need lim's of 268M/400M for 16e-small, since Greg has factored GNFS-210 with lim's of 268M/268M, but we could advertise 1.25GB for 16e-small just in case.
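To put the earlier "roughly linear in LIM plus some overhead" guess next to these numbers, here is a minimal two-point fit per siever. It mixes the 31LP and 32LP runs (which should be harmless, since memory doesn't appear to depend on LP size), and two points per siever is obviously a crude model:

    # memory ~= base + slope * lim, fit through the two measured points per siever
    # (lim in units of 1M, resident memory in MB, taken from the tests above)
    measured = {
        "14e": {134: 382, 268: 681},
        "15e": {134: 427, 268: 767},
        "16e": {134: 427, 268: 932},
    }

    for siever, points in measured.items():
        (x1, y1), (x2, y2) = sorted(points.items())
        slope = (y2 - y1) / (x2 - x1)   # MB per 1M of lim
        base = y1 - slope * x1          # extrapolated overhead at lim = 0
        print(f"{siever}: {slope:.2f} MB per 1M of lim, "
              f"{base:.0f} MB overhead, ~{base + slope*400:.0f} MB at lim=400M")
    # The 16e line extrapolates to a negative intercept, a reminder that this
    # is only a rough straight-line guess, not a real memory model.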

Last fiddled with by VBCurtis on 2020-10-02 at 17:42 Reason: Changed 0.4% to 4% on line comparing 16e to 15e
2020-09-02, 20:46   #31
VBCurtis

I picked out a current SNFS job (from Chris2be8: https://mersenneforum.org/showpost.p...postcount=2464) to test memory use. This job has lim's of 134M and 31LP, going to the "d" 14e queue despite the notes estimating difficulty as GNFS-182 equivalent. Suggested sieve range is 20-200M.

At Q=180M, 14e memory use is 382MB. A 5k Q-range took 29 minutes to run on my 2.5GHz Haswell-era desktop and found 5723 relations (yield 1.14); sec/rel was 0.296.

On the same input file and same Q, 15e memory use is 426MB, so SNFS jobs (on the -r side) do not use more memory than GNFS at the same lim choice. A Q-range of 2500 found 5725 rels in 27 minutes, 0.278 sec/rel.

I tried 16e: Still 427MB memory use. sec/rel was around 0.355 when I aborted after 5 minutes.

It looks like the "d" queue has 1 WU = 15,000 Q-range. This seems a bit high; while this SNFS test job is quite a bit tougher than what will land on the future "d" queue, roughly 90 minutes for 1 WU is well outside Carlos' guidance. I hope Carlos can use this timing data to figure out what combination of WU size and points will make the BOINCers happy without shifting attention away from Greg's big queue.
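A quick back-of-the-envelope with the numbers above shows where that ~90-minute figure comes from:

    # time per "d"-queue WU for this SNFS job, from the measured 14e test sieve
    rels_per_q  = 5723 / 5000   # yield: ~1.14 relations per special-q
    sec_per_rel = 0.296         # measured 14e cost per relation
    wu_q_range  = 15_000        # apparent WU size on the "d" queue

    wu_minutes = wu_q_range * rels_per_q * sec_per_rel / 60
    print(f"~{wu_minutes:.0f} minutes per WU")   # ~85 min, i.e. ~3x the 29-minute 5k test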
2020-09-02, 21:21   #32
pinhodecarlos

Curtis,

14e WU length is 16k Q values
15e WU length is 4k Q values
16e WU length is 1k Q values

Greg is the one who decides WU length and credit; I'm nobody here.

Edit: 14e is app d, 15e is app e and 16e is app f (siever)

Last fiddled with by pinhodecarlos on 2020-09-02 at 21:26
2020-09-02, 22:28   #33
VBCurtis

Well, I suppose if we were using the sievers for their usual ranges, the WU size and points would be rational again. Unless Greg asks for more data to tweak points or WU length, I'll leave that be.

I've tested Lucas(1366), a SNFS-285 job currently queued on 15e. It is queued with lim's of 268M, which on 15e -r side uses 767MB again. At Q=300M, a 2kQ test sieve yielded 1512 relations and 1.014 sec/rel.

Same test with 16e, 1kQ test: 932MB memory, 30 min CPU time to find 1607 rels (1.11 sec/rel).

It's a small sample of data, but this one test gives an example where 15e might be faster even on a job big enough to want lim's of 268M. So, I vote we set 268M as a hard lim cap and advertise memory use of 900MB or less for the 15e queue.
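Spelling that comparison out from the two test sieves above (duplicate rates ignored, and they would favour the larger siever somewhat):

    # Lucas(1366) test sieves at Q=300M: (Q-range, relations found, sec/rel)
    tests = {
        "15e": (2000, 1512, 1.014),
        "16e": (1000, 1607, 1.11),
    }

    for siever, (q_range, rels, sec_per_rel) in tests.items():
        yield_per_q = rels / q_range            # relations per special-q
        sec_per_q = yield_per_q * sec_per_rel   # sieve time per special-q
        print(f"{siever}: yield {yield_per_q:.2f}, {sec_per_rel:.3f} s/rel, "
              f"{sec_per_q:.2f} s per special-q")

    # 15e is ~9% cheaper per relation here; 16e finds ~2.1x the relations per
    # special-q, so it can get by with a correspondingly smaller Q-range.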

If Greg opens a 16e_small queue for us, we can shift Lucas(1366) and the GNFS-191 in "queued" status to it as the first jobs, which would shorten the 15e backlog considerably and get us on our way!