Graphics card usable for Prime?
NVIDIA wants to use graphics cards to calculate the physics in PC games.
When will Prime95 follow this idea and use the GPU for its math? Is it possible? Regards |
There are those PhysX cards being sold currently, and they must have some wicked power, because in gameplay videos I saw things that not even a $10,000 machine could do.
|
This has been brought up before,
the issue is the precision/size of the numbers (data formats) available. GPUs offer 32-bit single-precision floating point (earlier generations, 16-bit integer), but at least 64-bit double precision is needed for the FFT routines Prime95 uses in the Lucas-Lehmer algorithm. If a math algorithm can get by with the smaller range of 32-bit floats, then the GPU could be used; and if the algorithm benefits from the GPU's parallel processing, it could deliver very good performance. The physics calculations done on nVidia GPUs in SLI could also be done by the AGEIA PhysX Processor (PPU) that is starting to ship. |
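To make the precision point concrete: each element of an FFT-based convolution is a sum of many digit products, and every low-order bit of that sum must survive rounding. A minimal NumPy sketch (illustrative only, with assumed digit sizes; Prime95's real FFT code is far more elaborate) shows double precision recovering the sum exactly where single precision cannot:

```python
import numpy as np

rng = np.random.default_rng(1)
# One element of an FFT convolution is a sum of many digit products.
# With base-2^16 digits and 4096 terms, the exact sum needs up to
# 16 + 16 + 12 = 44 bits -- beyond float32's 24-bit mantissa,
# but comfortably inside float64's 53 bits.
a = rng.integers(0, 2**16, size=4096)
b = rng.integers(0, 2**16, size=4096)

exact = int(np.sum(a * b))  # exact in 64-bit integers
as64 = float(np.sum(a.astype(np.float64) * b.astype(np.float64)))
as32 = float(np.sum(a.astype(np.float32) * b.astype(np.float32)))

print(exact - int(as64))  # 0: double precision keeps every bit
print(exact - int(as32))  # nonzero: low-order result bits are lost
```

Once even one low-order bit is lost, the rounded convolution element is wrong, and the whole Lucas-Lehmer residue is garbage, which is why 32-bit-only hardware is unusable for this.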
As I wrote in an earlier thread:
[quote]The potential use of graphics cards to do L-L testing or factoring has been discussed several times in the Hardware forum. (Search on "graphics" or "video" there.) The main reason video cards are not suitable for GIMPS work is that GIMPS calculations require high precision and so use double-precision floating-point arithmetic, but video cards operate with only single-precision floating-point numbers. Single-precision FP is all that video work requires, so it is unlikely that any future video cards will be able to perform double-precision FP, no matter how advanced their other capabilities or speed. Q: Why not use single-precision? A: It has to do with the technical details of FFT arithmetic, especially the need to guard against losing low-order result bits because of FP rounding/truncation. For more information, search the Math forum threads discussing this subject.[/quote] |
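For context, the Lucas-Lehmer test being discussed is simple to state. A minimal sketch in plain Python integer arithmetic (Prime95 performs the same repeated squarings via double-precision FFTs, which is where the precision requirement comes from):

```python
# Lucas-Lehmer test: M_p = 2^p - 1 (p an odd prime) is prime iff
# s_{p-2} == 0, where s_0 = 4 and s_{i+1} = s_i^2 - 2 (mod M_p).
def lucas_lehmer(p):
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m  # Prime95 does this squaring with FFTs
    return s == 0

print([p for p in (3, 5, 7, 11, 13, 17, 19) if lucas_lehmer(p)])
# [3, 5, 7, 13, 17, 19]  (M_11 = 2047 = 23 * 89 is composite)
```

The `% m` squaring is the whole workload; at GIMPS exponent sizes it can only be done fast with FFT multiplication, hence the double-precision requirement.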
Has anybody looked into Trial Factoring on a GPU? It seems like the data needs of that task are a better match for the hardware.
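Trial factoring is indeed integer-only, which is a better match for 32-bit hardware. A minimal sketch of the idea (an assumed illustration, not Prime95's actual code): any factor q of 2^p - 1 has the form q = 2kp + 1 with q ≡ ±1 (mod 8), and testing one candidate is a single modular exponentiation:

```python
def trial_factor(p, max_k):
    """Search for a factor q = 2*k*p + 1 of the Mersenne number 2^p - 1."""
    for k in range(1, max_k + 1):
        q = 2 * k * p + 1
        if q % 8 not in (1, 7):
            continue  # factors of Mersenne numbers are ±1 mod 8
        if pow(2, p, q) == 1:  # q divides 2^p - 1 iff 2^p ≡ 1 (mod q)
            return q
    return None

print(trial_factor(11, 100))  # 23 divides 2^11 - 1 = 2047
print(trial_factor(31, 100))  # None: 2^31 - 1 is prime
```

Each candidate is independent of the others, so millions of k values could be tested in parallel, exactly the kind of workload GPUs are built for.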
|
Looks like the dedicated physics cards won't be suitable for LL work either.
According to [url]http://personal.inet.fi/atk/kjh2348fs/ageia_physx.html[/url], the PPUs from Ageia are "optimized for 32-bit floating-point math" (and yes, the demos using the PPUs look absolutely awesome). -- Craig |
Yes, 32-bit math only. Here is [URL="http://gamma.cs.unc.edu/GPUFFTW/"]a link[/URL] to the library distribution.
|
64 bit Floating Point Math on ATI R580 graphics cards
I recently saw a reference to [url]http://www.peakstreaminc.com[/url] which describes the performance increase that is possible using their software libraries and ATI R580 graphics cards. According to this web site they support 64 bit floating point math for high performance computing needs using C and/or C++. They are also offering a no cost evaluation program for Linux workstation users.
|
Something new under the sun?
[URL]http://ir.ati.com/phoenix.zhtml?c=105421&p=irol-newsArticle&ID=910519&highlight=[/URL] Or are we still facing the double-precision issues? |
How about latest nVidia [URL="http://www.nvidia.com/page/8800_tech_specs.html"]product[/URL]?
[quote][LIST][*]Full 128-bit floating point precision through the entire rendering pipeline[/LIST][/quote] |
What Nvidia means is actually a four-component vector composed of four 32-bit floats. See, as an example, how they describe the bit count:
[quote]128-bit floating point high dynamic-range (HDR) lighting with anti-aliasing * 32-bit per component floating point texture filtering and blending[/quote] |