mersenneforum.org > Great Internet Mersenne Prime Search > Hardware
Old 2003-03-24, 16:14   #12
QuintLeo
 

There's been ongoing talk for years about a Distributed.Net specific add-in card.

The problem that keeps cropping up is that such a card isn't cost-effective: by the time you can get it designed and working, general-purpose CPUs have gotten enough faster to blow the dedicated card away.

The only exception I've seen was the EFF's "Deep Crack" machine, but that took a LOT of money to develop in a timeframe that kept it useful, and it was DES-specific; DES was a lot simpler and faster to process.

I'm not saying it's not POSSIBLE, but I suspect a more general-purpose "buncha CPUs on a card" design would be more practical, using (for example) a group of Transmeta Crusoes to keep the power consumption at a tolerable level.
Old 2003-03-24, 16:57   #13
xtreme2k
 

The problem with an 'add-on' card is that it defeats the idea of DC in a sense, at least for the kind of DC project where we run the program when our CPUs are idle.

We run these programs, for example Prime95, to capture WASTED CPU POWER that would otherwise be lost when nothing is being executed on the CPU, not to buy another add-on card so that we can run Prime95 faster.

Making such hardware would cost a lot of money, and in the end so few people would buy it that it just isn't worth the hassle.

On the other hand, other kinds of DC projects, those run by large corporations and governments, such as distributed rendering of movies and so on, might benefit from the development of add-on cards.

Lastly, if somebody writes a program that will run Prime95 on my GRAPHICS CARD when I am not playing any games, I will certainly run it :)
Old 2003-03-27, 06:58   #14
TTn
 


I think the post was meant more for people who already possess a high-end card and want to put its idle time to use... ah yes, I would use it too.

Also, a screensaver or a game that runs Prime95 part-time would be excellent, and I believe it would appeal even to the non-mathematically inclined.

I have a great idea for a Prime95 video game, where the player actually homes in on an exponent and does some of the small work with zeroes and ones, which could just as easily be any other two opposing ideas.
I have the objects of the game saved as TurboCAD files.
The game could dedicate about 50% of CPU cycles to Prime95 and 50% to itself. I am working on an LLR version now.

Any suggestions ? ? ?
Anyone ? ? ?


Old 2003-03-27, 14:36   #15
eepiccolo
 

Quote:
Originally Posted by TTn
I have a great idea for a Prime95 video game, where the player actually homes in on an exponent and does some of the small work with zeroes and ones, which could just as easily be any other two opposing ideas.
I have the objects of the game saved as TurboCAD files.
The game could dedicate about 50% of CPU cycles to Prime95 and 50% to itself. I am working on an LLR version now.
I think I'm detecting some sarcasm here. Though if you're not being sarcastic, I think I would enjoy playing such a game. I think I'm rather disturbed. :(
Old 2003-03-27, 19:43   #16
Maybeso
 

I did a 'system info' on my machine@work (figured they'd frown at me popping the hood), then tried Google on the card type plus 'programming developer kit', etc. Not much luck until a link listed the chipset -- a Google search on that found it was 'OpenGL compatible'. There are a bunch of sites on OpenGL.

What all this tells me is: if someone develops an LL/factoring app for one video card very carefully, then porting it to others will be tedious but not too painful. If not, they may have to redesign/rewrite it for each card.

Realistically, we'll probably end up grouping 'similar' chipsets and writing an app for each group.

On the question of sharing the video card with the rest of the machine -- when the CPU switches tasks, it performs a 'state save' -- stack dump, whatever. How much do you wanna bet none of the OSes, especially windoze, bothers to do that with the video buffers/registers? For one thing, it would take way too long. Does that limit us to an app that steals all the video resources, like a screensaver?

Hmm, when I run an audio app, and then try to run another, it informs me that the resource is busy...
OK, our app flags the graphics engine as busy (how?) and starts factoring. Later, another app tries to use it. Our app saves its progress and releases the engine. It then sits in the background, sampling how busy the engine is, waiting until it is 'free' before starting up again.

This is getting more and more complicated. :(

Bruce
Old 2003-03-27, 20:19   #17
Maybeso
 

On the plus side, as I understand it, the display requests from most apps simply flow thru the display pipe and don't touch any of the special registers/buffers unless the app specifically contains calls to the graphics engine. Is this correct?


I have this image in my head ...

A DC Farm like Prime Monster with 24 or 32 motherboards ...

each motherboard with 8 video cards ... 8) 8)

Bruce
Old 2003-03-27, 20:43   #18
eepiccolo
 

OK, I don't know if anybody else has looked, but nVidia has something called Cg, which stands for "C for graphics". They tout it as a language that "allows developers to create advanced visual effects for today's programmable GPUs from NVIDIA and other vendors." Here's a link: http://developer.nvidia.com/view.asp?PAGE=cg_main
Anyway, I haven't looked at it closely, and I certainly have zero experience when it comes to graphics (though I do have experience in C); I'm basically posting this in case someone else knows anything about it.
I'll post here again if I learn anything new.

edit: As far as I can tell, one can use Cg to do all the normal mathematical operations you can do in C, so I suppose someone could try to make a TF or an LL program run on a GPU using Cg. As I said before, I have some experience in C, but I'm sure there are others out there who would be much better qualified to look into this. But for now, I'll keep studying Cg and GPU programming.
Old 2003-03-28, 10:22   #19
TTn
 


No sarcasm! :D

I will post a follow-up when I have finished the outline.


It is not so disturbed, compared to many games out there.
This one has purpose, meaning, order, chaos, etc.
But I am not a game programmer, so the outline is as far as I can take it. :( .....
Old 2003-03-28, 12:32   #20
eepiccolo
 

But I guess, TTn, that you're just talking about programming conventionally, and not using the GPU?
Old 2003-03-31, 15:21   #21
TauCeti
 
Some comments from an amateur

Hello!

Even if I lack the necessary experience in GPU programming and expertise in basic 3D graphics operations, I have the gut feeling that at least some 'classical' mathematical problems could benefit from modern GPUs.

Without detailing my thoughts in a technical sense (and that may well be _the_ point rendering the following thoughts useless), I will nonetheless drop some lines here:

I think the problem, or the part of the problem to solve, must be modeled in "GPU-space", meaning e.g. modelling the problem using textures and vertices and then applying T&L, pixel shaders, vertex shaders and other GPU-specific goodies.

I know that "encoding the problem and algorithms in GPU-space" is the key element to success. I guess it is also the most difficult to achieve for "real" problems.

To achieve a speedup over CPU processing, the whole process of a) "encode problem in GPU-space", b) transfer data to the graphics card, c) process, and d) measure results (getting framebuffer data) would obviously have to be faster than just calculating the problem on the CPU. So iterative algorithms on the GPU side, using as many GPU processing units as possible in parallel, seem to offer the biggest rewards.

Doing some research on the net, I found that using GPU hardware as a general-purpose SIMD computer is already work in progress.

For example: http://citeseer.nj.nec.com/peercy00interactive.html implements a system using the OpenGL API. They use textures for local variable storage and framebuffer blending for arithmetic operations (add, sub, mul); mathematical functions like sine or reciprocal are implemented via color or texture lookup tables with a framebuffer pixel-to-pixel copy. The stencil buffer is used for flow control.

A vast source of information is also http://wwwx.cs.unc.edu/~harrism/gpgpu/index.shtml.

I think i will have to read some of the papers presented there to get a better grip on the problem.

Tau
Old 2003-04-25, 13:27   #22
Dresdenboy
 

This may be of interest to you:
"Using Modern Graphics Architectures for General-Purpose Computing: A Framework and Analysis"
http://www.computer.org/proceedings/...8590306abs.htm

They used the vertex shaders for some matrix operations and other tests, and their P4 (2 GHz or so) was sometimes 20 times slower (!) than their GeForce 4.

Here are some shader language intros so you can get a first impression of the capabilities and of what might not be possible:
http://www.gamedev.net/reference/art...rticle1820.asp
http://www.gamedev.net/columns/hardc...der1/page5.asp

Regards,
DDB



