[QUOTE=Chuck;275705]I just installed the new driver; I noted a few timings that were on the screen before shutting down both instances of mfaktc.
After installing the new driver I waited for a couple of TFs to complete. I would say there is either no difference, or maybe the new driver is taking a little bit longer.[/QUOTE] [url=http://www.youtube.com/watch?v=eeqUUNHwAl8]Baby Driver[/url]
Could someone give me a hand getting mfaktc to run on my second GPU? I currently have two GTX 285s, and one is going to waste not being utilized.
[QUOTE=fruitflavor;277087]could someone give me a hand getting mfaktc to run on my second GPU? I currently have 2 gtx 285 and one is going to waste not being utilized.[/QUOTE]
Can you add more detail? What's the command you are using? For code to run on the 2nd GPU, you need to add the "-d 1" switch on the command line. -- Craig
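The usual approach is one mfaktc instance per GPU, each selected with "-d <id>". A minimal launcher sketch (the executable path and the two-GPU count are assumptions for this setup, not anything mfaktc ships with):

```python
# Illustrative sketch: build one mfaktc command line per GPU,
# selecting the device with "-d <id>". The executable name/path
# is an assumption; adjust it for your own installation.
NUM_GPUS = 2  # e.g. two GTX 285s

commands = [["./mfaktc", "-d", str(gpu_id)] for gpu_id in range(NUM_GPUS)]
# commands[0] targets GPU 0, commands[1] targets GPU 1.
```

Each instance would then be started from its own working directory so the two runs don't fight over the same worktodo/results files.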
They're counted as 0 and 1 then?
[QUOTE=Dubslow;277155]They're counted as 0 and 1 then?[/QUOTE]
Yep. I have two dual-GPU machines. I vaguely recall you need to make sure that SLI is _OFF_. -- Craig
[QUOTE=fruitflavor;277087]could someone give me a hand getting mfaktc to run on my second GPU? I currently have 2 gtx 285 and one is going to waste not being utilized.[/QUOTE]
Check the FAQ section in README.txt. Oliver
[QUOTE=Dubslow;277155]They're counted as 0 and 1 then?[/QUOTE]
Yep, mfaktc numbers the GPUs the same way nvidia does. Oliver
Hello,
it has been a relative long time since the last mfaktc release release. So here is a little teaser for mfaktc 0.18. So far there are more than usual minor updates but the performance numbers are pretty much the same compared to mfaktc 0.17. Stock 285 GTX, barrett79 kernel, raw GPU speed (without sieving)[CODE] | CUDA 3.2 | CUDA 4.1-RC1 mfaktc 0.17 | 74.19M/s | 74.16M/s mfaktc 0.18-pre9 | 74.46M/s | 74.46M/s[/CODE] Factory overclocked GTX 560Ti (1701MHz), barrett79 kernel, raw GPU speed (without sieving)[CODE] | CUDA 3.2 | CUDA 4.1-RC1 mfaktc 0.17 | 260.94M/s | 261.93M/s mfaktc 0.18-pre9 | 260.80M/s | 258.97M/s[/CODE] Stock GTX 470, barrett79 kernel, raw GPU speed (without sieving) This is my development system, so I have more detailed benchmarks here[CODE] | CUDA 3.2 | CUDA 4.0 | CUDA 4.1-RC1 mfaktc 0.17 | 305.25M/s | 319.32M/s | 322.63M/s mfaktc 0.18-pre8 | 308.28M/s | 319.53M/s | 321.34M/s mfaktc 0.18-pre9 | 312.18M/s | 323.28M/s | 335.56M/s[/CODE] mfaktc 0.18-pre9 has some CUDA >= 4.1 specific code. CUDA 4.1 has new instructions for Fermi class GPUs (multiply-add [B]with carry[/B]). Of course it is still compileable with older CUDA releases and for CC 1.x GPUs! I have to check why it is slower on GTX 560 Ti... So for the mfaktc 0.18 release[LIST][*]I want to rework the barrett92 kernel (CUDA 4.1 optimizations)[*]I want to wait for official CUDA 4.1 release[*]ask Eric which of his new code should be included[/LIST] Eric works on the automated primenet interaction. The current code does [B]not[/B] contain any code to interact with the primenet server but e.g. the parsing of worktodo has been rewritten (taking care of the assignmend ID, ...) Oliver |
Any "cosmetics" on the horizon? For example, making the output file name configurable in the ini file (now results.txt)? For people running many copies of mfaktc, that would keep all the results in the same file, with no need to walk from one folder to another to report them... And could a found factor be marked somehow in the table (for example by adding an asterisk after the class number, or replacing the "|" tabular character with "#" or "*"), so that if I missed a factor on the screen I can spot it and scroll up to it? (These were just some dummy ideas.)
Oh, I didn't mention: the benchmarks were for M66362159 from 2[SUP]69[/SUP] to 2[SUP]70[/SUP].
LaurV: yes, some cosmetics are included, but nothing you've mentioned. You could try "PrintMode=1" in mfaktc.ini for your "missed factors" issue. Oliver
A very desired update, and an offer of service!
[B]TheJudger[/B]: Thank you for all the work you've done to let us crunch for GIMPS using GPUs. I had almost completely exited GIMPS in protest at not being able to crunch with a GPU, but now I plan to configure my cluster and restart. But I have one big problem with the current version: since Windows really su**s and has no job-control commands, I can't unleash my main computer without blocking myself from doing anything else, like playing Skyrim. So I would really, really appreciate it if you could add a simple control to pause/resume the app, replacing the ctrl+z / fg combination of the great Bash. That way, when I want to play, I can pause it, and resume the crunching afterwards. This is what I hope you'll take the time to embed, and I'm pretty sure I'm not alone in this situation. If you have any hints I should know about that would do the job, please tell me. I'm thinking about making a small Python GUI launcher with the ability to pause/resume, but I need you to at least add a signal handler, or any control-character catcher, that catches ctrl-d (leaving ctrl-c to interrupt) and simply alternates between the two states. Use a common combination that works on both Windows and Linux, even if Linux doesn't need it. If you want, I can make a Python/Tk GUI to control the app, even controlling more than one instance at the same time, starting one per core, whatever. Leave me a message if you're interested!! If you make me the pause/resume, I'll make you the GUI/launcher!! Deal?