Best use of large-capacity server
The flagship server in my little cluster is a dual-socket 2011-v3 system with two Xeon E5-2698 v3 CPUs: 32 cores total at 2.3 GHz and 256 GB of RAM running at 2133 MHz. Its primary purpose is as a GPU host, but isn't that a lot of CPU and RAM to leave idle, or to spend merely running DCs/LLs?
The low CPU clock means running LLs is only occasionally worth it vs. my faster 8-core i7 systems, which also have slightly faster 2400 MHz memory. I've seen alright performance if I dedicate many threads (throughput peaks around 8) to a single exponent, but it still doesn't seem to be the best use of the system. I've considered letting the CPU/RAM do 32 threads' worth of P-1 work. What is the best use of this system? |
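As a hedged sketch of the P-1 idea above: in Prime95/mprime this would be a stack of worktodo.txt entries plus a generous stage-2 memory allowance in local.txt. The exponent and bounds below are made-up placeholders (not real assignments), and the `;` lines are annotation only:

```
; worktodo.txt -- one P-1 entry per worker (fields: k, b, n, c, B1, B2 for k*b^n+c);
; exponent and bounds here are illustrative placeholders
Pminus1=1,2,79300000,-1,1000000,30000000

; local.txt -- let stage 2 use most of the 256 GB (value is in MB)
Memory=200000
```

Stage 2 of P-1 is where the big memory pays off, so the `Memory=` setting matters more here than for LL work.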
Someone more knowledgeable than me will probably recommend elliptic curve factoring using GMP-ECM on Mersenne numbers with [url=http://www.mersenne.org/report_ecm/]no known factors[/url], as high bounds can benefit from the large amount of memory you have available. How all that is done is beyond my little head :)
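For what it's worth, a single high-bound curve with the GMP-ECM command-line tool might look like the sketch below. The exponent (M1277, which has no known factor), the B1 bound, and the memory cap are all illustrative choices, and `-maxmem` takes megabytes:

```shell
# run one ECM curve at B1 = 850e6 on 2^1277-1,
# capping stage-2 memory use at roughly 200 GB
echo "2^1277-1" | ecm -c 1 -maxmem 200000 850000000
```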
|
That sort of machine could greatly help nfs@home with postprocessing the larger numbers. [url]http://mersenneforum.org/forumdisplay.php?f=98[/url]
It would be capable of doing the largest jobs, although some of them could take a few months. |
I second Henry's suggestion, as there are tasks to be done that require 32 or even 64 GB of RAM, a spec in short supply. There are tasks that require even more memory, but as he said those also take months to complete (and a partial solution is not easy to transfer to someone else, since "nobody" else has 128 GB or more with which to finish it). NFS post-processing parallelizes nicely across your 32 cores, so you'd do these tasks at least twice as fast as those of us with mere 6-core i7s.
Within the Mersenne project, GMP-ECM is indeed a potent use of massive memory. Madpoo likely has info for you on how many LL tests will nearly saturate your memory, with the remaining cores spent on ECM. GMP-ECM uses massive memory but is massively more efficient at finding factors; Madpoo experimented with it and can give you some info if you don't find his thread about ECM. LL testing is fine for any Intel-based machine, but your server has unique capabilities due to its memory capacity, whilst its CPU cycles for LL are no more potent than a similar number of cores spread over simple desktops. |
I second VBCurtis's opinion about GMP-ECM. A large amount of memory like the one you have available would be very useful for finding large factors of very small exponents.
You may find lots of info here: [URL="http://www.mersenneforum.org/showthread.php?t=20092"]http://www.mersenneforum.org/showthread.php?t=20092[/URL] |
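Prime95 itself can also queue ECM on small exponents via worktodo.txt. A hedged sketch of the entry format (fields are k, b, n, c, B1, B2, curve count; all values below are placeholders, and B2 semantics vary by version, so check undoc.txt for your build):

```
; run 10 ECM curves at B1 = 850e6 on 2^1277-1 (placeholder values);
; B2 left as 0 here -- consult your version's docs for how B2 defaults are chosen
ECM2=1,2,1277,-1,850000000,0,10
```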
Thanks for the input. I have some attachment to the Mersenne search, but I would also like to do the most good possible with these resources.
I actually have a second system with dual 1.8 GHz 4-core CPUs that also has significant RAM; this weekend I will take a look at setting up some testing and see what makes sense. Multi-month jobs are not a problem: this is a dedicated number-theory research cluster. |
[QUOTE=airsquirrels;408285]Thanks for the input. I have some attachment to the Mersenne search, but I would also like to do the most good possible with these resources.
I actually have a second system with dual 1.8 GHz 4-core CPUs that also has significant RAM; this weekend I will take a look at setting up some testing and see what makes sense. Multi-month jobs are not a problem: this is a dedicated number-theory research cluster.[/QUOTE] My suggestion, then, would be to do some smaller jobs for nfs@home while working out the best setup on your machines. It may make sense not to use all the cores on each CPU and to use the rest for LL/P-1/ECM. As a postprocessor you might be able to encourage the largest jobs to be factorizations of Mersenne numbers; in fact, they are currently sieving 2^1285-1. |
[QUOTE=henryzz;408300]As a postprocessor you might be able to encourage the largest jobs to be factoring mersenne numbers. In fact currently they are sieving 2^1285-1.[/QUOTE]
This suggestion is misleading. Postprocessing for a GNFS-218 (2^1285-1) cannot (and will not) be done on a single machine, even with 64 cores. There are smaller postprocessing jobs in the pipeline, though, for which this server can do some good. |
[QUOTE=Batalov;408301]This suggestion is misleading. Postprocessing for a GNFS-218 (2^1285-1) cannot (and will not) be done on a single machine, even with 64 cores.
There are smaller postprocessing jobs in the pipeline, though, for which this server can do some good.[/QUOTE] I was under the impression that jobs like that just needed enough memory and would take many months on a PC like this. What would be the timeframe/memory capacity needed for such a job (CPUs similar to the above)? |
Something like the Lonestar cluster (I think Lonestar has by now been retired; there are other resources at XSEDE.)
Cf. [url]https://eprint.iacr.org/2012/444.pdf[/url] (Section 5); this job is only slightly larger. GNFS-218 is like an SNFS-335 (roughly 1113 bits, well under 1285, so GNFS is clearly appropriate; for comparison, M1061 was a 1061-bit SNFS job).
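The digit/bit conversion behind that parenthetical is just a factor of log2(10) ≈ 3.3219; a quick sanity check with any POSIX awk:

```shell
# SNFS difficulty of 335 decimal digits, expressed in bits: 335 * log2(10)
# prints "1113 bits" -- comfortably below the 1285 bits of 2^1285-1
awk 'BEGIN { printf "%.0f bits\n", 335 * log(10) / log(2) }'
```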
Off topic: is 768 bits still the record for the largest GNFS, and 1061 bits for SNFS?
|