[QUOTE=pinhodecarlos;519870]What about associating the clients to teams?[/QUOTE]
I love the idea but I'm going to punt on it right now.
[QUOTE=fivemack;519866]The daily_r curve is useful, because differentiating the total-work-done curve by eye isn't really possible; thank you.
I think it would be even more useful to have the last-24-hours figure at a per-client level.[/QUOTE]
Done.

[QUOTE=lukerichards;519897]Is there any way to add instance-1, localhost and lrichards-pre2core to this collection as well, so I can see my complete combined stats without mental arithmetic? Thanks.[/QUOTE]
Done.
[QUOTE=SethTro;519950]Done[/quote]
Thank you
VBCurtis gave me access to the logs from an older factoring effort (2^945*13-1) so I could test visualizing multiple projects at the same time; check it out here: [url]http://factoring.cloudygo.com/13_945/[/url]
So each instance will use 4 threads and 9.6GB RAM? Is there anything different that needs to be done to run multiple instances? I used this command:
./cado-nfs-client.py --bindir=build/panda --server=http://{redacted}.edu:{port}
Do I just run the same thing in another xterm window in the same dir?
Yep! Any number of instances may be run from the main CADO directory.
You can change the number of threads with --override t {n}, but 4 to 6 threads per client seems to run best. I'm running two 5-threaded instances on a 10-core machine, for instance.
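To make the above concrete, here is a minimal sketch of building that client command line with the thread override; the server URL is a placeholder standing in for the redacted one in the thread, and the flag usage is only what the posts themselves show.

```shell
# Sketch: assemble one client invocation with an explicit thread count.
SERVER="http://example.edu:8001"   # placeholder for the real {redacted} server
THREADS=5                          # 4-6 threads per client reportedly runs best
CMD="./cado-nfs-client.py --bindir=build/panda --server=$SERVER --override t $THREADS"
echo "$CMD &"   # run in the background; repeat in the same directory for each extra instance
```

Each additional instance is just the same line run again from the main CADO directory.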
Are reservations needed or just an idea of instances, threads and time?
I have 1x 4t and 8x 6t running. Should be able to leave it running for at least a week.
Nothing is needed, though the CADO software package is not as stable as we'd like; I worry that, for example, firing up 100 clients at once might make it fall over. As far as I can tell, the software can handle at least 20 connections/workunits per minute, so a small stagger while firing up an army should be fine.
Thanks for your contribution!
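The "small stagger while firing up an army" could look something like this hypothetical launcher; the sleep interval and client count are assumptions, and the actual launch line is commented out so this only prints what it would do.

```shell
# Hypothetical staggered start: one client every STAGGER seconds, so the
# server sees a trickle of new connections rather than a burst of 100.
SERVER="http://example.edu:8001"   # placeholder
NCLIENTS=8
STAGGER=15                         # seconds between launches; an assumption
launched=0
for i in $(seq 1 "$NCLIENTS"); do
  echo "./cado-nfs-client.py --bindir=build/panda --server=$SERVER &  # client $i"
  launched=$((launched + 1))
  # sleep "$STAGGER"               # uncomment for a real staggered launch
done
```

At ~20 workunits per minute of server capacity, a 15-second stagger keeps even a large fleet well under the observed limit.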
Apologies but looks like my super faster laptop is slowly crunching away...lol
[QUOTE=pinhodecarlos;520137]Apologies but looks like my super faster laptop is slowly crunching away...lol[/QUOTE]
I too cried Havoc!, and let slip the dogs of war. Or something. Despite its name, my machine DESKTOP-C5KKONV is also a laptop. Just chewing up WUs and spitting out relations! It’s fun to contribute, even if it’s only a very small part. The cloudygo site keeps the effort fresh - thanks SethTro. I hope Vebis and buster return soon.
[QUOTE=swellman;520139]It’s fun to contribute, even if it’s only a very small part. The cloudygo site keeps the effort fresh - thanks SethTro. I hope Vebis and buster return soon.[/QUOTE]
I wondered whose "desktop" that was! I agree about Seth's site; it's very helpful and adds to the entertainment of this lengthy project.

We've reached Q=80M. remdups update: 628.8M raw relations, 452.4M unique. I'm targeting 1.8G unique relations, so by unique-count we're more than 25% done!

My reasoning: the C206 that Greg ran for us this spring had 792M unique 33LP relations, and produced a nice 43M matrix that is expected to take ~1100hr on 10 cores. Adding 1LP adds ~70% more relations, giving 1.35G for 34LP. But we're using 35LP on one side, so I added 30% more and then rounded a bit. I picked 2.7G raw by assuming the duplication rate would match the C206. We're doing a tiny bit better so far, but duplicates rise later in a job, so we'll wait quite a while before any adjustment to our target-relations count.

At what size is the msieve large-dataset bug expected to manifest? I can try filtering this dataset long before a matrix is possible, just to see how msieve handles the relation set around the size where problems may lie. I'd like to try a 34LP project with GGNFS/msieve in the future, so it would be nice to know if filtering is likely to work with, say, 1G unique relations.
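The target-relations arithmetic above can be checked back-of-envelope; the numbers come straight from the post, while the 70% and 30% bumps are the poster's stated rules of thumb rather than exact theory.

```python
# Sanity-check the 1.8G unique-relation target from the post's own numbers.
base_33lp = 792e6                   # unique relations from the comparable C206 job (33LP)
target_34lp = base_33lp * 1.70      # ~70% more relations per extra large-prime bit
target_35lp = target_34lp * 1.30    # ~30% more again for 35LP on one side

unique_so_far = 452.4e6             # current remdups unique count
progress = unique_so_far / 1.8e9    # against the rounded 1.8G target

print(f"estimated target: {target_35lp/1e9:.2f}G unique")  # lands just under 1.8G
print(f"progress: {progress:.1%}")                         # a bit over 25%
```

The raw estimate comes out to about 1.75G, which is consistent with rounding up to the stated 1.8G goal.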