mersenneforum.org (https://www.mersenneforum.org/index.php)
-   GPU Computing (https://www.mersenneforum.org/forumdisplay.php?f=92)
-   -   mfaktc: a CUDA program for Mersenne prefactoring (https://www.mersenneforum.org/showthread.php?t=12827)

xilman 2011-06-06 17:47

[QUOTE=Xyzzy;263107]This morning we discovered that two of our eight workers had transferred all of their remaining worktodo.txt contents to their results.txt files and thus were idle.

:mike:[/QUOTE]Yeah, I get that all too often. The GPUs chew through work at such a frightening rate that even though I ask for 200 assignments at a time there's still a risk of running dry. :sad:

My other system, the one with a C1060, was upgraded to Fedora 15 a few days ago, which killed the CUDA installation. Also :sad:


Until it's fixed it will be reduced to running msieve.


Paul

Xyzzy 2011-06-06 18:03

We usually reserve 500 or more assignments per core.

What we mentioned above is that the client actually moved the worktodo.txt assignments directly to the results.txt file, without doing them. (The results.txt file looked like a worktodo.txt file.)

We are not sure if you are agreeing that the client moves stuff around which causes your queue to run dry or if you are just saying that your queue is running dry because the GPU is so fast. Or both?

:max:

xilman 2011-06-06 19:11

[QUOTE=Xyzzy;263117]What we mentioned above is that the client actually moved the worktodo.txt assignments directly to the results.txt file, without doing them. (The results.txt file looked like a worktodo.txt file.)

We are not sure if you are agreeing that the client moves stuff around which causes your queue to run dry or if you are just saying that your queue is running dry because the GPU is so fast. Or both?

:max:[/QUOTE]I misunderstood the point you made in your first quoted para. I've never seen that behaviour. My queue does indeed run dry too often because the GPU is so fast.

Xyzzy 2011-06-06 19:30

FWIW, Fish1 is investigating the possibility of a user error in this situation.

[SIZE=1]Snake1: They are going to blame me for this mess![/SIZE]

TheJudger 2011-06-06 21:35

XYZZY, please let me know if it was a layer 8 problem or not. :wink:

Oliver

Uncwilly 2011-06-06 22:40

[QUOTE=xilman;263115]Yeah, I get that all too often. The GPUs chew through work at such a frightening rate that even though I ask for 200 assignments at a time there's still a risk of running dry.[/QUOTE]</shame>Try running some of the expos in the 100M digit range up to 80 or 81 bits.<shame>

Christenson 2011-06-07 02:04

Just bump up your bit level... by 1 and it will keep you busy, by 10 and it will keep you busy all year!
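(Each extra bit of trial-factoring depth roughly doubles the remaining work, since bit level [b, b+1) holds about as many factor candidates as all the levels below it combined. A minimal Python sketch of that scaling; the constant factors from mfaktc's sieving and classes are ignored:

```python
# Rough multiplier on remaining trial-factoring work when the stop
# bit is raised by `bump` levels.  Each level doubles the candidate
# count, so the cost grows as ~2**bump (sieve constants ignored).

def extra_work_factor(bump: int) -> int:
    """Approximate work multiplier for raising the stop bit by `bump`."""
    return 2 ** bump

print(extra_work_factor(1))   # bump by 1: ~2x the work
print(extra_work_factor(10))  # bump by 10: ~1024x the work
```
)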

xilman 2011-06-07 09:41

[QUOTE=Uncwilly;263136]</shame>Try running some of the expos in the 100M digit range up to 80 or 81 bits.<shame>[/QUOTE]I take what the server gives me. The organizers of the project are much more likely to know better than I what my resources should be doing.

Paul

Uncwilly 2011-06-07 12:41

[QUOTE=xilman;263160]I take what the server gives me. The organizers of the project are much more likely to know better than I what my resources should be doing.[/QUOTE]The PrimeNet settings don't take your GPU hardware into consideration. If you want to work on the expos that PrimeNet is handing out, then I would suggest that you do the following:
a) get an allotment from the server,
b) find out how far they are normally to be taken ([url]http://mersenne-aries.sili.net/factorbits.php[/url])
c) add 2 to that number,
d) in your worktodo, replace the stop bit you were handed with the new number.

That is effectively what George says is OK for GPUs, and it should make your worktodo last much longer.
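(The steps above can be sketched as a small script. The `Factor=AID,exponent,bit_lo,bit_hi` field order is the usual worktodo layout, but this helper and the example assignment line are hypothetical, not part of any official tool:

```python
# Hypothetical sketch of step (d): raise the stop bit on each
# "Factor=AID,exponent,bit_lo,bit_hi" line of a worktodo file.

def bump_stop_bits(lines, new_hi):
    """Return the worktodo lines with each Factor= stop bit raised
    to at least `new_hi`; non-Factor lines pass through unchanged."""
    out = []
    for line in lines:
        if line.startswith("Factor="):
            fields = line.rstrip("\n").split(",")
            fields[-1] = str(max(int(fields[-1]), new_hi))  # never lower a level
            out.append(",".join(fields) + "\n")
        else:
            out.append(line)
    return out

# Example: an assignment handed out to 71 bits, taken up to 73.
src = ["Factor=ABC123,332192831,70,71\n"]
print(bump_stop_bits(src, 73))  # → ['Factor=ABC123,332192831,70,73\n']
```
)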

Christenson 2011-06-07 12:46

[QUOTE=xilman;263160]I take what the server gives me. The organizers of the project are much more likely to know better than I what my resources should be doing.

Paul[/QUOTE]

I thought, to some degree, you were a project organizer....
However, IMO, the project is advanced by maximizing the number of exponents eliminated for minimum effort....even if there is some disagreement on how best to measure that effort.

I'm running about 1 in 30 successful TFs right now, taking perhaps an hour apiece on the wall clock. So for one or two days' compute effort, I eliminate approximately one exponent. This compares quite favorably with my P-1 efforts, which take 50-60 GHz-days to eliminate an exponent, and significantly more than a day to get those 60 GHz-days of work in on a CPU.
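(As a back-of-envelope check on the comparison above, using the figures quoted in this post; the CPU throughput of 40 GHz-days per wall-clock day is an assumed, illustrative number, not from the thread:

```python
# Rough comparison of exponents eliminated per wall-clock day,
# using the approximate figures quoted in the post.

tf_hours_per_attempt = 1.0    # ~1 hour per TF run on the GPU
tf_success_rate = 1 / 30      # ~1 in 30 attempts finds a factor

pm1_ghz_days_per_factor = 55  # ~50-60 GHz-days of P-1 per factor
cpu_ghz_days_per_day = 40     # assumed CPU throughput (hypothetical)

tf_factors_per_day = 24 / tf_hours_per_attempt * tf_success_rate
pm1_factors_per_day = cpu_ghz_days_per_day / pm1_ghz_days_per_factor

print(f"TF:  ~{tf_factors_per_day:.2f} exponents/day")   # ~0.8, i.e. one
print(f"P-1: ~{pm1_factors_per_day:.2f} exponents/day")  # per 1-2 days each
```
)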

We are working, slowly, on the automatic interactions....

xilman 2011-06-07 15:15

[QUOTE=Christenson;263172]So for one or two days' compute effort, I eliminate approximately one exponent. This compares quite favorably with my P-1 efforts, which take 50-60 GHz-days to eliminate an exponent, and significantly more than a day to get those 60 GHz-days of work in on a CPU.[/QUOTE]
That's roughly what I'm doing, though the rate is probably closer to two factors a day.
[QUOTE=Christenson;263172]
We are working, slowly, on the automatic interactions....[/QUOTE]Good! The sooner it arrives the better. I'd much rather have a fire-and-forget solution than have to remember to do all the babysitting. If I also have to faff around editing input files to compensate for a present inadequacy of the task-allocation strategy, then it's quite likely that my GIMPS contribution will fall to zero. Of course, if uncwilly would rather have no contribution in favour of his desired pattern of contribution ...


Paul

