mersenneforum.org > Great Internet Mersenne Prime Search > Hardware
2015-08-18, 03:00   #1
airsquirrels ("David", Jul 2015, Ohio)

Best use of large capacity server
The flagship server in my little cluster is a dual-socket LGA2011-v3 machine with Xeon E5-2698 v3 CPUs: 32 cores in total at 2.3 GHz, and 256 GB of RAM running at 2133 MHz. Its primary purpose is as a GPU host, but that is a lot of CPU and RAM to leave idle, or to spend on double checks or LL tests.

The CPU clock makes running LL tests only occasionally worthwhile compared with my faster 8-core i7 systems, which also have somewhat faster 2400 MHz memory. I've seen decent performance when I dedicate many threads (throughput peaks around 8) to a single exponent, but it still doesn't seem like the best use of the system.
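For context on what an "LL" is: the Lucas-Lehmer test itself is only a few lines; nearly all the machine time goes into squaring numbers tens of millions of bits long, which Prime95 does with multithreaded FFTs (hence the sensitivity to core count and memory speed). A toy Python sketch of the test, practical only for tiny exponents:

```python
# Lucas-Lehmer test: M_p = 2^p - 1 is prime iff s_{p-2} == 0 (mod M_p),
# where s_0 = 4 and s_{k+1} = s_k^2 - 2. For real exponents the squaring
# is the expensive step and is done with huge FFTs.

def lucas_lehmer(p):
    """Return True if the Mersenne number 2^p - 1 is prime (p an odd prime)."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0
```

For example, lucas_lehmer(13) reports 8191 prime, while lucas_lehmer(11) correctly rejects 2047 = 23 × 89.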

I've considered letting the CPU/RAM do 32 threads' worth of P-1 work.
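On the P-1 idea: in Prime95/mprime the two knobs that matter are the memory allowance (stage 2 of P-1 is where big RAM shines, allowing larger bounds and plans) and the worktodo entries. A hedged sketch from memory of the config format; the exponent and bounds below are made-up illustrations, and the exact syntax should be checked against your version's readme/undoc.txt:

```ini
; local.txt -- allow workers to use a large slice of the 256 GB
; (value in megabytes)
Memory=200000

; worktodo.txt -- an illustrative P-1 entry (format: Pminus1=k,b,n,c,B1,B2
; for the number k*b^n+c; this exponent and these bounds are invented)
Pminus1=1,2,79300543,-1,800000,24000000
```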

What is the best use of this system?
2015-08-18, 05:33   #2
Mark Rose
Someone more knowledgeable than I am will probably recommend elliptic curve factoring with GMP-ECM on Mersenne numbers with no known factors, since curves with high bounds can benefit from the large amount of memory you have available. How all that is done is beyond my little head :)
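GMP-ECM is the tool for real work. Purely to illustrate the idea it implements at industrial scale, here is a toy stage-1 ECM in Python (my own sketch, not GIMPS or GMP-ECM code, and only usable on tiny composites): pick a random elliptic curve mod n, multiply a point by a highly smooth scalar, and a factor falls out exactly when a modular inversion fails.

```python
# Toy Lenstra ECM, stage 1 only. Real work on Mersenne numbers needs
# GMP-ECM itself (stage 2 is where the large memory pays off).
from math import gcd
import random


class FactorFound(Exception):
    """Raised when a non-invertible denominator reveals a factor of n."""


def _inv(x, n):
    # Modular inverse; a failed inversion is exactly how ECM finds factors.
    g = gcd(x % n, n)
    if 1 < g < n:
        raise FactorFound(g)
    if g == n:
        raise ZeroDivisionError  # point at infinity: abandon this curve
    return pow(x, -1, n)


def _ec_add(P, Q, a, n):
    # Add points on y^2 = x^3 + a*x + b (mod n); b is never needed.
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % n == 0:
        raise ZeroDivisionError  # result is the point at infinity
    if P == Q:
        s = (3 * x1 * x1 + a) * _inv(2 * y1, n) % n
    else:
        s = (y2 - y1) * _inv(x2 - x1, n) % n
    x3 = (s * s - x1 - x2) % n
    return x3, (s * (x1 - x3) - y1) % n


def _ec_mul(k, P, a, n):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1:
            R = P if R is None else _ec_add(R, P, a, n)
        k >>= 1
        if k:
            P = _ec_add(P, P, a, n)
    return R


def ecm_stage1(n, B1=10000, curves=200, seed=42):
    """Try to find a nontrivial factor of n; return it, or None."""
    rng = random.Random(seed)
    # Sieve the primes up to B1.
    sieve = bytearray([1]) * (B1 + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(B1 ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, B1 + 1, i)))
    primes = [i for i in range(2, B1 + 1) if sieve[i]]
    for _ in range(curves):
        # Random curve through a random point (b is implied by x, y, a).
        x, y, a = (rng.randrange(n) for _ in range(3))
        P = (x, y)
        try:
            for p in primes:
                pk = p  # largest power of p not exceeding B1
                while pk * p <= B1:
                    pk *= p
                P = _ec_mul(pk, P, a, n)
        except FactorFound as e:
            return e.args[0]
        except ZeroDivisionError:
            continue  # hit the point at infinity; try another curve
    return None
```

Real GMP-ECM adds a memory-hungry stage 2, faster curve forms, and FFT-based arithmetic; that stage 2 is where 256 GB of RAM becomes an advantage.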
2015-08-18, 19:48   #3
henryzz

That sort of machine could greatly help NFS@Home with post-processing the larger numbers: http://mersenneforum.org/forumdisplay.php?f=98
It would be capable of the largest jobs, although some of them could take a few months.
2015-08-18, 20:22   #4
VBCurtis

I second Henry's suggestion: there are tasks to be done that require 32 or even 64 GB of RAM, a spec in short supply. There are tasks that require even more memory, but as he said those also take months to complete (and a partial solution is not easy to hand off to someone else, since almost nobody else has the 128 GB or more needed to finish it). NFS post-processing parallelizes nicely across your 16 cores, so you'd do these tasks at least twice as fast as those of us with mere 6-core i7s.

Within the Mersenne project, GMP-ECM is indeed a potent use of massive memory. Madpoo is likely to have info for you about how many LL tests will nearly saturate your memory bandwidth, while the rest of the cores can be spent on ECM. GMP-ECM uses massive amounts of memory but is massively more efficient at finding factors; again, Madpoo experimented with it and can give you some details if you don't find his thread about ECM.

LL testing is fine on any Intel-based machine, but your server's memory capacity gives it unique capabilities, while its CPU cycles spent on LL are no more potent than the same number of cores spread over ordinary desktops.
2015-08-19, 12:56   #5
lycorn

I second VBCurtis' opinion about GMP-ECM. A large amount of memory like the one you have available would be very useful for searching for large factors of very small exponents.
You can find lots of info here: http://www.mersenneforum.org/showthread.php?t=20092
2015-08-19, 13:18   #6
airsquirrels

Thanks for the input. I have some attachment to the Mersenne search, but I would also like to do the most good possible with these resources.

I actually have a second system with dual 1.8 GHz 4-core CPUs that also has significant RAM. This weekend I will look at setting up some testing and see what makes sense. Multi-month jobs are not a problem; this is a dedicated number-theory research cluster.
2015-08-19, 16:23   #7
henryzz

Quote:
Originally Posted by airsquirrels
Thanks for the input. I have some attachment to the Mersenne search, but I would also like to do the most good possible with these resources.

I actually have a second system with dual 1.8 GHz 4-core CPUs that also has significant RAM. This weekend I will look at setting up some testing and see what makes sense. Multi-month jobs are not a problem; this is a dedicated number-theory research cluster.
My suggestion, then, would be to do some smaller jobs for NFS@Home while working out the best setup on your machines. It might turn out to make sense not to use all the cores on each CPU, and to put the rest on LL/P-1/ECM.
As a post-processor you might be able to encourage the largest jobs to be factorizations of Mersenne numbers; in fact they are currently sieving 2^1285-1.
2015-08-19, 17:00   #8
Batalov

Quote:
Originally Posted by henryzz
As a post-processor you might be able to encourage the largest jobs to be factorizations of Mersenne numbers; in fact they are currently sieving 2^1285-1.
This suggestion is misleading. Post-processing for a GNFS-218 (the ~218-digit cofactor of 2^1285-1) cannot (and will not) be done on a single machine, even one with 64 cores.

There are smaller post-processing jobs in the pipeline, though, for which this server can do some good.
2015-08-19, 17:20   #9
henryzz

Quote:
Originally Posted by Batalov
This suggestion is misleading. Post-processing for a GNFS-218 (the ~218-digit cofactor of 2^1285-1) cannot (and will not) be done on a single machine, even one with 64 cores.

There are smaller post-processing jobs in the pipeline, though, for which this server can do some good.
I was under the impression that jobs like that just needed enough memory and would then take many months on a PC like this.
What timeframe and memory capacity would such a job need, with CPUs similar to the above?
2015-08-19, 19:13   #10
Batalov

Something on the scale of the Lonestar cluster (I think Lonestar has by now been retired; there are other resources at XSEDE).

Cf. https://eprint.iacr.org/2012/444.pdf (Section 5).
This job is only slightly larger: GNFS-218 is comparable in effort to an SNFS-335 (about 1115 bits, well under the 1285 bits an SNFS job on 2^1285-1 would face, so GNFS is clearly the appropriate method; for comparison, M1061 was a 1061-bit SNFS job).
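The size comparison above is mostly unit conversion between bits and decimal digits. A quick check of the arithmetic (the ~1.54x GNFS-to-SNFS digit ratio below is simply what the two quoted numbers imply, not a precise law):

```python
import math

LOG10_2 = math.log10(2)  # about 0.30103

# 2^1285-1 as a full SNFS target: its size in decimal digits
m1285_digits = math.floor(1285 * LOG10_2) + 1   # 387 digits

# The comparison point, an SNFS-335 job, converted to bits
snfs335_bits = 335 / LOG10_2                    # about 1113 bits

# The implied effort ratio between SNFS and GNFS digit counts
ratio = 335 / 218                               # about 1.54
```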
2015-08-19, 20:30   #11
ATH

Off topic: is 768 bits still the record for the largest GNFS factorization, and 1061 bits for SNFS?