Thread: GPU questions
2012-05-08, 02:48   #2
Dubslow

Quote:
Originally Posted by c10ck3r
Howdy!
So, I have a GTX 460 that is currently running mfaktc 0.18 (2 instances), fed by a 3.2 GHz single-core Pentium D 940 with 2 GB of DDR2 RAM.
My first question is: for GPU72 exponents (currently 56M 71-72 bits), will there be a noticeable difference between 420 and 4620 classes?
My second question is: for GPU72 exponents (and similarly for larger exponents), would P-1 be feasible to implement on a GPU, even if only on a partial scale? I'll break this down into two parts: B1/B2 at the normal bounds (or with a new multiplier based on GPU performance), or B1 = B2 (i.e., no stage 2).
My third question (sorry, they just keep coming :P ) is: if (2) is feasible, is anyone currently trying to implement it, or has anyone already done so successfully?
Thanks!
Johann
1) No, and in fact running two instances fed by just one core probably won't help anything. You should definitely be using more classes, though: while the difference is not major, it is still enough to favor more classes over fewer.
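To illustrate why the finer class split helps, here is a small sketch of the class-elimination idea (my own illustration, not mfaktc's actual code; the function name and the sample exponent are made up). Candidate factors of M_p have the form q = 2kp+1 and must be ±1 mod 8, and since 420 = 4·3·5·7 while 4620 = 4·3·5·7·11, every class of k fixes q modulo those small primes, so whole classes can be discarded before any sieving:

```python
# Hedged sketch of mfaktc-style class elimination (not mfaktc's actual code).
# Factors q of the Mersenne number M_p = 2^p - 1 have the form q = 2*k*p + 1
# and satisfy q ≡ 1 or 7 (mod 8). Because 420 = 4*3*5*7 and 4620 = 4*3*5*7*11,
# q mod 8 (and q mod 3, 5, 7, and possibly 11) is constant on each class
# k ≡ c (mod num_classes), so whole classes can be skipped up front.

def surviving_classes(num_classes, p):
    """Count residues c = k mod num_classes that can still produce a factor."""
    alive = 0
    for c in range(num_classes):
        q = 2 * c * p + 1
        if q % 8 not in (1, 7):          # factors of M_p are ±1 mod 8
            continue
        # a class dies if every q in it is divisible by a small prime s
        if any(q % s == 0 for s in (3, 5, 7, 11) if num_classes % s == 0):
            continue
        alive += 1
    return alive

# p is a hypothetical odd exponent coprime to 3, 5, 7, 11; any exponent in
# the 56M range behaves identically for this count.
p = 56000039
for n in (420, 4620):
    a = surviving_classes(n, p)
    print(f"{n} classes: {a} survive ({a / n:.1%})")
```

This prints 96 surviving classes out of 420 (22.9%) versus 960 out of 4620 (20.8%): the finer split additionally discards the classes where 11 | q, which the 420-class split cannot see.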

2) I'm not entirely sure, but P-1 would need the same kind of FFT-based multiplication of large numbers as the LL test; only the surrounding calculations differ a bit.
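For reference, stage 1 of P-1 can be sketched on the CPU in a few lines (my own illustration; the function names and the bound in the example are made up). It raises 3 to the product of all prime powers up to B1, with 2p folded in because any factor q of M_p satisfies q ≡ 1 (mod 2p), then takes a gcd. On a GPU, the modular exponentiations would be built from the same FFT-based squarings the LL test uses:

```python
# Minimal CPU sketch of P-1 stage 1 for M_p = 2^p - 1 (illustration only;
# a real GPU implementation would replace pow() with FFT-based modular
# multiplication, the same core operation the LL test needs).
from math import gcd

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i in range(2, n + 1) if sieve[i]]

def pminus1_stage1(p, B1):
    """Return a nontrivial factor of M_p found with stage-1 bound B1, or None."""
    N = (1 << p) - 1              # M_p
    x = pow(3, 2 * p, N)          # factors are ≡ 1 (mod 2p), so fold 2p in
    for q in primes_up_to(B1):
        e = 1                     # largest e with q**e <= B1
        while q ** (e + 1) <= B1:
            e += 1
        x = pow(x, q ** e, N)
    g = gcd(x - 1, N)
    return g if 1 < g < N else None

# M_29 = 233 * 1103 * 2089. Apart from the factor 29 (which 2p supplies),
# 232 and 2088 are 10-smooth, so B1 = 10 finds 233 * 2089 = 486737, while
# 1103 (1102 = 2 * 19 * 29) would need B1 >= 19.
print(pminus1_stage1(29, 10))   # -> 486737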

3) Again, I'm not sure, but I don't think anybody is working on it. LaurV ran a small experiment, but I don't think he has gotten anywhere with it.