mersenneforum.org

mersenneforum.org (https://www.mersenneforum.org/index.php)
-   GPU Computing (https://www.mersenneforum.org/forumdisplay.php?f=92)
-   -   The P-1 factoring CUDA program (https://www.mersenneforum.org/showthread.php?t=17835)

TheMawn 2013-08-22 00:01

Does the P-1 CUDA program use double precision? Would a GTX 660 Ti be well-suited for P-1, or should I stick to TF? I'd like to up my participation in P-1 matters, but not enough to lose too much productivity...

kladner 2013-08-22 00:31

[QUOTE=TheMawn;350400]Does the P-1 CUDA program use double precision? Would a GTX 660 Ti be well-suited for P-1, or should I stick to TF? I'd like to up my participation in P-1 matters, but not enough to lose too much productivity...[/QUOTE]

The program is derived from CUDALucas, and yes, it relies on DP.

The 660 Ti is 14th in [URL="http://www.mersenne.ca/mfaktc.php?sort=ghdpd&noA=1"]TF performance[/URL] among nVidia chips.
The 660 Ti is 15th in [URL="http://www.mersenne.ca/cudalucas.php"]LL performance[/URL] according to mersenne.ca.

Since CuLu and PM1 are cousins, performance might be similar, but owftheevil, frmky, etc. would have to comment on a comparison between the programs.

frmky 2013-08-22 01:19

[QUOTE=TheMawn;350400]Does the P-1 CUDA program use double precision? Would a GTX 660 Ti be well-suited for P-1, or should I stick to TF? I'd like to up my participation in P-1 matters, but not enough to lose too much productivity...[/QUOTE]

Yes, it relies on the same cuFFT library as CUDALucas. Productivity has many definitions. If you are concerned about GHz-days/day, then stick to TF.
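For anyone curious why P-1 shares the LL code's FFT machinery: stage 1 is essentially one enormous modular exponentiation followed by a GCD, so it lives and dies by fast big-number multiplication. A minimal plain-Python sketch of the idea (illustrative only; the actual GPU program obviously works on exponents millions of bits long via FFT arithmetic, and the bounds here are toy-sized):

```python
from math import gcd

def small_primes(limit):
    """Simple sieve of Eratosthenes: all primes <= limit."""
    sieve = bytearray([1]) * (limit + 1)
    sieve[:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def pm1_stage1(N, B1, a=3):
    """P-1 stage 1: raise a to every prime power <= B1 (mod N),
    i.e. compute a^M mod N for M = lcm(1..B1), then take
    gcd(a^M - 1, N).  Returns a nontrivial factor of N, or None."""
    x = a
    for q in small_primes(B1):
        qk = q
        while qk * q <= B1:   # largest power of q still <= B1
            qk *= q
        x = pow(x, qk, N)     # one modular exponentiation per prime
    g = gcd(x - 1, N)
    return g if 1 < g < N else None
```

For example, on 2^43-1 a bound of B1 = 120 suffices to pull out the cofactor 431 x 9719, because both 431-1 = 2·5·43 and 9719-1 = 2·43·113 are 120-smooth; real GIMPS assignments use B1 in the millions, which is where the DP FFT multiplies dominate.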

kracker 2013-08-22 01:21

Also, something's not right on James's chart: the 660 Ti and 670 have the same output (237.7 TF).

EDIT: Expect the performance of CuLu and P-1 to be around a tenth or less of TF.

EDIT2: Blame nVidia for reducing DP on Kepler, and compute overall!

TheMawn 2013-08-22 01:23

Wow. I didn't know about that page. Thanks for that.

That's all I need to know. P-1 progress would be okay, but I'd be losing more than 90% of my productivity in GHz-days by switching away from TF.

chalsall 2013-08-22 01:36

[QUOTE=TheMawn;350408]That's all I need to know. P-1 progress would be okay but I'd be losing more than 90% of my productivity in GHz-Days by switching away from TF.[/QUOTE]

And, to put it on the table, the overall effort needs a lot more TFing to keep up with the P-1'ing (we're still comfortably ahead of the LL'ing).

(Carl: My offer for a really good dinner here in Barbados stands (as does my offer of my virtual first born).)

TheMawn 2013-08-22 01:47

[QUOTE=kracker;350407]EDIT2: Blame nVidia for reducing DP on Kepler and overall all compute![/QUOTE]

Don't you mean blame Nvidia for making a GPU that's better at playing video games? Hah!

I've heard of people complaining that a $4000 Quadro or some such had terrible video game performance for a $200 card, and that for $4000 they should be getting a LOT LOT LOT LOT more. These people are just idiots, of course, but if Nvidia did make the CuLu God-GPU specially for us, it would certainly be bad at video games and would piss off the dumb section of the overwhelmingly huge gamer portion of their market.

We're just along for the ride, in the end...

James Heinrich 2013-08-22 02:08

[QUOTE=kracker;350407]Also on James's chart, something's not right, the 660 ti and 670 have the same output (237.7 TF)[/QUOTE]No, that's right. There's very little difference between the [url=http://en.wikipedia.org/wiki/Comparison_of_Nvidia_graphics_processing_units#GeForce_600_Series]GTX 660 Ti and GTX 670[/url], just a difference (24 vs 32) in the number of [url=http://en.wikipedia.org/wiki/Render_output_unit]ROPs[/url]. Same theoretical SP GFLOPS (2459.52).
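James Heinrich 2013-08-22 02:08 (addendum)

For what it's worth, the chart's theoretical figure is just cores x 2 FLOPs per clock (one fused multiply-add) x base clock, and both cards are GK104 parts with 1344 CUDA cores at a 915 MHz base clock, so the tie is exactly what the arithmetic predicts:

```python
# Theoretical peak single-precision throughput:
#   shader cores x 2 FLOPs/clock (FMA counts as two) x clock rate.
# The GTX 660 Ti and GTX 670 share both figures (1344 cores, 915 MHz base),
# differing only in ROP count, which this formula ignores.
cores, base_mhz = 1344, 915
sp_gflops = cores * 2 * base_mhz / 1000
print(sp_gflops)  # 2459.52, the same chart value for both cards
```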

kracker 2013-08-22 02:16

[QUOTE=TheMawn;350412]Don't you mean blame Nvidia for making a GPU that's better at playing video games? Hah!

I've heard of people complaining that a $4000 Quadro or some such had terrible video game performance for a $200 card, and that for $4000 they should be getting a LOT LOT LOT LOT more. These people are just idiots, of course, but if Nvidia did make the CuLu God-GPU specially for us, it would certainly be bad at video games and would piss off the dumb section of the overwhelmingly huge gamer portion of their market.[/QUOTE]
Well, yeah, the huge majority use their GPUs for gaming, and I understand that. "Good at one thing, bad at another" isn't necessarily true, though. Look at AMD's GCN, for example: they're tied with Nvidia's 600 series on gaming.

[quote]
We're just along for the ride, in the end...[/quote]Yes indeed. I don't think anyone can disagree with that...

Manpowre 2013-08-22 08:13

[QUOTE=TheMawn;350412]I've heard of people complaining that a $4000 Quadro or some such had terrible video game performance for a $200 card, and that for $4000 they should be getting a LOT LOT LOT LOT more. These people are just idiots, of course, but if Nvidia did make the CuLu God-GPU specially for us, it would certainly be bad at video games and would piss off the dumb section of the overwhelmingly huge gamer portion of their market.
[/QUOTE]

Well, I work in a company that uses the Quadro series for everything we do. First of all, the Quadro series is not meant for CUDA, even though the cards can be used for it. They have registered memory and are downclocked to ensure every pixel is rendered without a mistake.

You don't care about a pixel error when playing Battlefield at 100 frames per second.

But you do care about pixel errors when rendering out a scene meant for high-level TV or movie production, or when using the cards for live television graphics (tickers, lower thirds, etc.). There, that cannot happen.

That is what the Quadro boards are for: simply a professional platform, with an API to hook up to applications in a different way that is not possible with GeForce drivers. Quadro boards also get better support from Nvidia, on both the software and the hardware side.

So you do not pay for the board; you pay for the services, the API, and the development that the card gives you, plus a 100% guarantee of no pixel issues.

The CUDA card Nvidia has for enthusiasts is called the Nvidia Titan. It's great,
but the design with memory on the back side is terrible, so you have to downclock the card to ensure the memory on the back doesn't run too hot.

owftheevil 2013-08-23 00:57

1 Attachment(s)
This has the mentioned bug fixes. Again, compiled with the CUDA 5.0 toolkit.

I've been messing with building CUDALucas on Windows. It builds and runs with correct results and decent speed, but with some oddities. Is it normal on Windows for the polite option to need to be set to a very low positive value so that the GUI stays usable?


All times are UTC. The time now is 23:19.

Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.