#133 |
|
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
3·17·97 Posts |
Quote:
How far along are we with sieving? |
|
|
|
|
|
|
#134 |
|
"Curtis"
Feb 2005
Riverside, CA
28×19 Posts |
About half a percent. We've done just over 1MQ. Yield is 10.0 at Q=9M.
The server burped last night and wasn't sending work for 7-8 hours. I killed and restarted CADO, and it has been fine since. No idea what happened; my home computer runs a client and it reported no problem either (it just stalled, using 0% CPU). |
|
|
|
|
|
#135 |
|
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
3·17·97 Posts |
Up and running, 4 threads (wish it were 8), 9.6GB used.
Differences from the last installation: 1) 100 GB instead of 150 GB for the VM; 2) Lubuntu instead of Ubuntu; 3) allocated 14GB instead of 13-something; 4) followed Ed's guide with Paul supporting me once again. Last fiddled with by pinhodecarlos on 2019-05-26 at 21:44 |
|
|
|
|
|
#136 |
|
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
3×17×97 Posts |
Curtis,
The server decides how many threads the client runs, so if I want to run more clients of 4 threads each, I'll have to copy the CADO folder to another place, is this correct? Naive question from a Windows user: I suspect I don't have to compile CADO again. Also, is there a way to limit memory on the client side? I'm wondering whether more threads using less memory would be more effective than only running 4 threads using 8.9GB, e.g. 8 threads running in 12-14GB. What's the best setup (threads, clients, memory) for a 32c/64t machine with 64GB? |
|
|
|
|
|
#137 |
|
"Curtis"
Feb 2005
Riverside, CA
28×19 Posts |
Carlos-
You can run any number of clients from the single CADO folder, no need to copy anything. I've run 6 client instances and the server instance all from one place (on 7 command lines, all within the same folder).

You're correct that 4 threads is a server choice, picked to allow the broadest number of users to be able to contribute: a quad-core with 16GB is a fairly common setup for a desktop 'round these parts. RichD, swellman, and I all have 16GB machines with 4-6 cores.

For my larger machines, I'm planning to run as many clients as fit in memory while using the spare cores for other "regular" projects. For instance, I have a dual 10-core 64GB on which I'll run two clients on each socket, with a couple of cores left over for LLR work or prime-sieving work. In your case, I think you only get to use half the machine for this CADO factorization, though a 5th client might work.

Once we reach Q=220M or so, the factorization will switch to I=15, memory use will drop to ~3GB per 4-threaded client, and we'll all have much more flexibility to use all our cores. My test-sieving indicates Q=8-220M on I=16 for ~1500M relations, followed by Q=220-1000M on I=15 for ~1200M relations. Your big machine can then run 8 or 16 clients, depending on whether HT proves helpful. |
|
|
|
|
|
#138 |
|
"Carlos Pinho"
Oct 2011
Milton Keynes, UK
3·17·97 Posts |
How many 4-threaded instances are now connected to your server?
Why not set up another server to distribute work from Q=220M onwards, so that smaller players like me can run more threads with lower memory usage? Would it be beneficial to split the sieving range? Just wondering... Last fiddled with by pinhodecarlos on 2019-05-27 at 20:24 |
|
|
|
|
|
#139 |
|
"Curtis"
Feb 2005
Riverside, CA
10011000000002 Posts |
If I knew how to recombine the relations folders in a way that CADO's automated postprocessing would be certain to find, I would run the I=16 and I=15 jobs at the same time. I don't trust that I wouldn't be stuck with two half-completed jobs and far too many relations to let msieve bail me out, so I'm not taking that chance.
There are currently 6 clients working; 3 of mine, RichD, Luke, and you. Expected soon are swellman, 2 more from me, and 2x ET_. Expected later are a bunch from fivemack. A slow ramp-up in clients is welcome; if you're ready to aim part of that 32-core at us, go right ahead. |
|
|
|
|
|
#140 |
|
"Seth"
Apr 2019
293 Posts |
I have a 32-core machine that I want to try NFS factoring with.
I was hoping there would be more complete CADO instructions (I've followed the how-to-install on Ubuntu and dry-ran some small C100 numbers). If you could give slightly more detailed instructions on where to find the poly, what config to point at... I'm happy to throw another client at it. |
|
|
|
|
|
#141 |
|
Sep 2008
Kansas
17·199 Posts |
Later in this project the memory requirements will drop significantly and perhaps you can utilize your rig to the fullest.
|
|
|
|
|
|
|
#142 |
|
"Seth"
Apr 2019
293 Posts |
htop is showing 9 gigs of memory usage per instance.
I measured with time -f "Time: %e, Maxmem %M", which showed ~11 GB of usage at some point during a workunit. Does this mean I have a low-end system :/ and am safe to run (MAX_MEM / 12 GB) instances? |
|
|
|
|
|
#143 |
|
Sep 2008
Kansas
17·199 Posts |
I wouldn't call yours a low-end system. I have 4-core 16 GB boxes, each good for one instance. You can run several instances from the same folder until you run out of memory. Don't commit more memory than you physically have, because that will only slow you down with thrashing/swapping. Maybe leave a little memory left over for other projects.
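That rule of thumb can be sketched as simple arithmetic. The ~12 GB per client and the headroom figure below are assumptions taken from the measurements earlier in this thread, not CADO parameters:

```python
def client_count(total_gb, per_client_gb=12, reserve_gb=4):
    """How many clients fit in RAM without oversubscribing.

    per_client_gb ~ the ~11-12 GB peak measured per I=16 workunit;
    reserve_gb leaves a little memory for other projects.
    """
    return max((total_gb - reserve_gb) // per_client_gb, 0)

print(client_count(16))  # typical 4-core/16 GB desktop -> 1 instance
print(client_count(64))  # 32-core/64 GB machine -> 5 instances
```

With these assumed numbers, a 16 GB box runs one client and a 64 GB box runs five, which lines up with the advice in this thread.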
|
|
|
|
Similar Threads
| Thread | Thread Starter | Forum | Replies | Last Post |
| Coordination thread for redoing P-1 factoring | ixfd64 | Lone Mersenne Hunters | 81 | 2021-04-17 20:47 |
| big job planning | henryzz | Cunningham Tables | 16 | 2010-08-07 05:08 |
| Sieving reservations and coordination | gd_barnes | No Prime Left Behind | 2 | 2008-02-16 03:28 |
| Sieved files/sieving coordination | gd_barnes | Conjectures 'R Us | 32 | 2008-01-22 03:09 |
| Special Project Planning | wblipp | ElevenSmooth | 2 | 2004-02-19 05:25 |