#12
Oct 2002
Lost in the hills of Iowa
1110000002 Posts

There's been ongoing talk for years about a Distributed.Net-specific add-in card.

The problem that keeps cropping up is that such a card isn't cost-effective: by the time you can get it designed and working, general-purpose CPUs have gotten enough faster to blow such a dedicated card away. The only exception I've seen was the EFF's "Deep Crack" machine - but that took a LOT of money to develop in a timeframe that kept it useful, and it was DES-specific; DES was a lot simpler and faster to process. I'm not saying it's not POSSIBLE - but I suspect a more general-purpose "buncha CPUs on a card" would be more practical, using (for example) a group of Transmeta Crusoes to keep the power consumption at a tolerable level.

#13
Aug 2002
2·3·29 Posts

The only problem with an 'add-on' card is that it defeats the idea of DC in a sense, at least for the kind of DC projects where we run the program while our CPUs are idle.

We run these programs, for example Prime95, to capture WASTED CPU POWER that would otherwise be lost if nothing were being executed on the CPU - not to buy another add-on card so we can run Prime95 faster. Making such hardware costs a lot of money, and in the end so few people would buy it that it is just not worth the hassle.

On the other hand, other kinds of DC projects - those run by large corporations and governments, such as distributed rendering of movies, etc. - might benefit from the development of add-on cards.

Lastly, if somebody writes a program that will run Prime95 on my GRAPHICS CARD when I am not playing any games, I will certainly run it :)

#14
3²×5²×31 Posts

I think the post was meant more for people who already possess a high-end card and want to put its idle time to use... ah yes, I would use it too.

Also, a screensaver or game that runs Prime95 part-time would be excellent, and I believe it would appeal even to the non-mathematically inclined. I have a great idea for a Prime95 video game, where the player actually homes in on an exponent and does some of the small work with zeroes and ones - which could just as easily be any other two opposing ideas. I have the objects of the game saved as TurboCAD files. The game could dedicate about 50% of its CPU cycles to Prime95 and 50% to itself. I am working on an LLR version now. Any suggestions? Anyone?

Previous message: "We run these programs, for example Prime95, to capture WASTED CPU POWER that would otherwise be lost if nothing were being executed on the CPU..."

#15
Dec 2002
Frederick County, MD
2×5×37 Posts

:(

#16
Aug 2002
Portland, OR USA
2×137 Posts

I did a 'system info' on my machine at work (figured they'd frown at me popping the hood), then tried Google on the card type plus 'programming developer's kit', etc. Not much luck until a link listed the chipset - a Google search on that revealed it was 'OpenGL compatible'. There are a bunch of sites on OpenGL.

What all this tells me is: if someone develops an LL/factoring app for one video card very carefully, then porting it to others will be tedious but not too painful. If not, they may have to redesign/rewrite it for each card. Realistically, we'll probably end up grouping 'similar' chipsets and writing an app for each group.

On the question of sharing the video card with the rest of the machine: when the CPU switches tasks, it performs a 'state save' - stack dump, whatever. How much you wanna bet none of the OSes, especially windoze, bothers to do that with the video buffers/registers? For one thing, it would take way too long. Does that limit us to an app that steals all the video resources, like a screensaver?

Hmm, when I run an audio app and then try to run another, it informs me that the resource is busy... OK, our app flags the graphics engine as busy (how?) and starts factoring. Later, another app tries to use it. Our app saves its progress and releases the engine. It then sits in the background sampling how busy the engine is, waiting until it is 'free' before starting up again. This is getting more and more complicated. :(

Bruce

#17
Aug 2002
Portland, OR USA
2×137 Posts

On the plus side, as I understand it, the display requests from most apps simply flow thru the display pipe and don't touch any of the special registers/buffers unless the app specifically contains calls to the graphics engine. Is this correct?

I have this image in my head... a DC farm like Prime Monster with 24 or 32 motherboards... each motherboard with 8 video cards... 8) 8)

Bruce

#18
Dec 2002
Frederick County, MD
2×5×37 Posts

OK, I don't know if anybody else has looked, but NVIDIA has something called Cg, which means "C for graphics". They tout it as something that "allows developers to create advanced visual effects for today's programmable GPUs from NVIDIA and other vendors." Here's a link: http://developer.nvidia.com/view.asp?PAGE=cg_main

Anyway, I haven't looked at it closely, and I certainly have zero experience when it comes to graphics (though I do have experience in C); I'm basically posting this in case someone else knows anything about it. I'll post here again if I learn anything new.

edit: As far as I can tell, one can use Cg to do all the normal mathematical operations you can do in C, so I suppose someone could try to make a TF or an LL program run on a GPU using Cg. As I said before, I have some experience in C, but I'm sure there are others out there who would be much better qualified to look into this. But for now, I'll keep studying Cg and GPU programming.

#19
3×5²×89 Posts

No sarcasm! :D

I will post a follow-up when I have finished the outline. It is not so disturbed, compared to many games out there. This one has purpose, meaning, order, chaos, etc. But I am not a game programmer, so the outline is as far as I can take it. :(

#20
Dec 2002
Frederick County, MD
2×5×37 Posts

But I guess, TTn, that you're just talking about programming conventionally, and not using the GPU?

#21
Mar 2003
Braunschweig, Germany
3428 Posts

Hello!

Even if I lack the necessary experience in GPU programming and expertise in basic 3D graphics operations, I have the gut feeling that at least some 'classical' mathematical problems could benefit from modern GPUs. Without detailing my thoughts in a technical sense (and that may well be _the_ point rendering the following thoughts useless), I would nonetheless like to drop some lines here.

I think the problem, or the part of the problem to solve, must be modeled in "GPU-space" - that means e.g. modelling the problem using textures and vertices and then applying T&L, pixel shaders, vertex shaders, and other GPU-specific goodies. I know that "encoding the problem and algorithms in GPU-space" is the key element to success; I guess it is also the most difficult to achieve for "real" problems.

To achieve a speedup over CPU processing, the whole pipeline of a) encoding the problem in GPU-space, b) transferring data to the graphics card, c) processing, and d) measuring results (getting framebuffer data) would obviously have to be faster than just calculating the problem on the CPU. So iterative algorithms on the GPU side, using as many GPU processing units as possible in parallel, seem to offer the biggest rewards.

Doing some research on the net, I found that using GPU hardware as a general-purpose SIMD computer is already work in progress. For example, http://citeseer.nj.nec.com/peercy00interactive.html implements a system using the OpenGL API. They use textures for local variable storage and framebuffer blending for arithmetic operations (add, sub, mul). They implement mathematical functions like sin or reciprocal using color or texture lookup tables with a framebuffer pixel-to-pixel copy. The stencil buffer is used for flow control.

A vast source of information is also http://wwwx.cs.unc.edu/~harrism/gpgpu/index.shtml. I think I will have to read some of the papers presented there to get a better grip on the problem.

Tau

#22
Apr 2003
Berlin, Germany
16916 Posts

This may be of interest to you:

"Using Modern Graphics Architectures for General-Purpose Computing: A Framework and Analysis"
http://www.computer.org/proceedings/...8590306abs.htm

They used the vertex shaders for doing some matrix stuff and other tests, and their P4 (2 GHz or so) was sometimes 20 times slower (!) than their GeForce 4.

Here are some shader language intros so you can get a first impression of the capabilities and of what might not be possible:
http://www.gamedev.net/reference/art...rticle1820.asp
http://www.gamedev.net/columns/hardc...der1/page5.asp

Regards, DDB

Similar Threads

| Thread | Thread Starter | Forum | Replies | Last Post |
| Prime95 and graphics cards | keithschmidt | Information & Answers | 45 | 2016-09-10 10:08 |
| Modern C | Dubslow | Programming | 15 | 2016-01-12 09:13 |
| New Linux rootkit leverages graphics cards for stealth. | swl551 | Lounge | 0 | 2015-05-08 14:06 |
| Nvidia's next-generation graphics cards | ixfd64 | GPU Computing | 22 | 2014-11-15 04:25 |
| how do graphics cards work so fast? | ixfd64 | Hardware | 1 | 2004-06-02 03:01 |