2011-09-29, 04:41   #4
NBtarheel_33 ("Nathan", joined Jul 2008, Maryland, USA)

Thanks for the help

I tried it out last night, but on a Windows machine - it's easier (and quicker, which matters when you're renting the machine by the hour!) to set things up that way. I played with a dual quad-core 2.93 GHz Nehalem system with two NVIDIA Fermi GPUs.

On the GPUs, I was getting through 65-bit assignments in the 292M range in about 40-45 seconds each! They take 12-13 minutes on my 2007 Core 2 Duo, and 18-20 minutes on my 2006 Pentium 4. I ran one instance of mfaktc on each GPU.

I also played around with P-1 on the eight Nehalem cores (with hyperthreading, so you actually get 16 threads). Stage 1 on a 50M exponent, using all eight cores/16 threads, looks like it needs about 3-4 hours! The system had 23 GB of RAM, so Stage 2 of P-1 would be interesting. There are other non-GPU systems available with as much as 68 GB of RAM - I wonder what would happen if I gave all of that to a single P-1... Hmm...
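For a rough sense of the speedup on the TF side, here's the back-of-the-envelope math, just using the midpoints of the timing ranges quoted above (Python):

Code:
# Rough throughput comparison for 65-bit TF in the 292M range,
# using the midpoint of each timing range quoted above.
gpu_seconds_per_tf   = 42.5        # one Fermi GPU: 40-45 s
core2_seconds_per_tf = 12.5 * 60   # 2007 Core 2 Duo: 12-13 min
p4_seconds_per_tf    = 19 * 60     # 2006 Pentium 4: 18-20 min

print(f"GPU vs Core 2 Duo: {core2_seconds_per_tf / gpu_seconds_per_tf:.0f}x faster")
print(f"GPU vs Pentium 4:  {p4_seconds_per_tf / gpu_seconds_per_tf:.0f}x faster")

# Two GPUs, one mfaktc instance on each:
tfs_per_hour = 2 * 3600 / gpu_seconds_per_tf
print(f"About {tfs_per_hour:.0f} TFs per hour with both GPUs going")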

Didn't try CUDALucas yet. It's probably not economically feasible at $2+ per hour to try to run an entire LL.

The system cost $2.94 per hour, so at two TFs every 45 seconds, that works out to about 1.84 cents per TF with just the GPUs running. Say it's 1.5 cents per TF once the CPU is contributing work as well - that means it would cost around $345 to clear 292M-293M (roughly 23,000 TFs at 1.5 cents each). We should probably add to the "You're Addicted to GIMPS When..." thread - "You know you're addicted to GIMPS when you rent high-performance computing clusters to process your assignments" ... LOL.
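And here's the cost arithmetic spelled out - note the ~23,000 exponent count is just what the $345 figure implies at 1.5 cents each, not something pulled off PrimeNet:

Code:
# Back-of-the-envelope cost estimate from the figures above.
dollars_per_hour    = 2.94
seconds_per_two_tfs = 45     # the two GPUs each finish one TF in ~45 s

cents_per_tf_gpu_only = dollars_per_hour * 100 * seconds_per_two_tfs / 3600 / 2
print(f"GPUs only: {cents_per_tf_gpu_only:.2f} cents per TF")   # ~1.84 cents

# Assume ~1.5 cents per TF once the CPU cores are doing P-1 alongside,
# and roughly 23,000 unfactored candidates between 292M and 293M
# (the count implied by the $345 estimate, not an exact PrimeNet number).
cents_per_tf_combined = 1.5
candidates            = 23_000
print(f"Clearing 292M-293M: about ${cents_per_tf_combined * candidates / 100:,.0f}")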