That is a single instance. My CPU is (almost) able to keep up with it. The average rate has been consistent for the last month, completely independent of the assignment. GPU-Z, when I was monitoring this in Windows, reported 80% load at 850 MHz, with 180 M/s and a runtime of 3h17m (assuming I'm not using the computer, i.e. max efficiency) for Factor=N/A,50222647,69,72 (run as one assignment, not with separated bit levels, which from what I've heard does change mfaktc's efficiency).
It's [url]http://www.newegg.com/Product/Product.aspx?Item=N82E16814127519[/url]
Hopefully that helps.
[QUOTE=TheJudger;281069]All you have to do is wait for mfaktc 0.18 (which depends mainly on the public release of CUDA 4.1)[/QUOTE]
CUDA4.1 RC2 appears to be available! [URL]http://developer.nvidia.com/cuda-toolkit-41[/URL] |
[QUOTE=James Heinrich;281113]Perfect, thanks. The above is the 4 pieces of info I need.
This is exactly why I need more data points: :smile:
8800GT: 13.90 GFLOPS per GHz-day/day
GTX 460: 8.18 GFLOPS per GHz-day/day
[b]edit:[/b] Hmm, [i]kladner[/i] -- which GPU is your GTX 460 using? GF104 or GF114? (If you're not sure, something like [url=http://www.techpowerup.com/downloads/SysInfo/GPU-Z/]GPU-Z[/url] will tell you.)[/QUOTE]
I ran one instance for each of my graphics cards (in CrossFire) on my i5-2500K overclocked to 4.3 GHz. I just about maxed out my CPU and my GPUs with SievePrimes at 25000.

ASUS HD 6950 DirectCUII (810 MHz)
Factor=N/A,52101913,69,70
Usage between 85 and 90%, usually around 87%.
no factor for M52101913 from 2^69 to 2^70 [mfakto 0.09-Win mfakto_cl_71]
tf(): total time spent: 30m 18.750s

and

ASUS HD 6950 DirectCUII (810 MHz)
Factor=N/A,52123333,69,70
Usage between 87 and 91%, usually around 90%.
no factor for M52123333 from 2^69 to 2^70 [mfakto 0.09-Win mfakto_cl_71]
tf(): total time spent: 29m 36.217s
[QUOTE=James Heinrich;281113]Perfect, thanks. The above is the 4 pieces of info I need.
This is exactly why I need more data points: :smile:
8800GT: 13.90 GFLOPS per GHz-day/day
GTX 460: 8.18 GFLOPS per GHz-day/day
[b]edit:[/b] Hmm, [i]kladner[/i] -- which GPU is your GTX 460 using? GF104 or GF114? (If you're not sure, something like [url=http://www.techpowerup.com/downloads/SysInfo/GPU-Z/]GPU-Z[/url] will tell you.)[/QUOTE]
Wouldn't something like SievePrimes be important too? I get just about the same numbers when I run SievePrimes=10000, but my time increases significantly.

PS - I feel this should be in a separate benchmark thread. I feel like I am threadjacking.
[QUOTE=KyleAskine;281162]Wouldn't something like sieveprimes be important too? I mean, I get just about the exact same numbers when I run sieve primes = 10000, but my time increases reasonably significantly.
PS - I feel this should be in a separate benchmark thread. I feel like I am threadjacking.[/QUOTE]
Yes, SievePrimes certainly affects running time, and even if it didn't, it affects how much of the work is done on the GPU rather than the CPU. To account for that split, you'd likely need to include CPU data too...
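To illustrate the point being discussed: in mfaktc/mfakto, candidate factors are pre-sieved on the CPU against small primes before the GPU does trial division, and SievePrimes controls how many primes are used. The sketch below is a simplified, hypothetical model of that CPU-side sieve (the real programs use a sliding bitmap sieve, not list comprehensions); it only shows why raising SievePrimes trades CPU work for fewer GPU candidates.

```python
# Simplified model of the SievePrimes trade-off (NOT mfaktc's actual code):
# candidate factors of M(p) have the form 2*k*p + 1, and the CPU discards
# any candidate divisible by one of the first SievePrimes small primes
# before handing the rest to the GPU.

def small_primes(n):
    """Return the first n primes via simple trial division."""
    primes = []
    c = 2
    while len(primes) < n:
        if all(c % p for p in primes):
            primes.append(c)
        c += 1
    return primes

def sieve_candidates(candidates, sieve_primes):
    """Keep only candidates not divisible by any sieve prime."""
    return [f for f in candidates if all(f % p for p in sieve_primes)]

p = 52101913                                   # exponent from this thread
candidates = [2 * k * p + 1 for k in range(1, 10000)]
survivors = sieve_candidates(candidates, small_primes(100))
# more sieve primes -> more CPU time, but fewer candidates left for the GPU
```

A larger prime list strictly shrinks the survivor set, which is why a higher SievePrimes shifts load from GPU to CPU.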
GTX 580 datapoint
EVGA Black Ops GTX 580 factory OC 797 MHz
Factor=n/a,49938787,71,72
Usage: 44%
tf(): total time spent: 1h 58m 2.857s
Oh yes, @TheJudger: could you also adjust the parser so that it can read both Windows and Linux worktodo files? I don't know about the opposite case, but the Linux version is unable to handle the CR/LF line endings of Windows text files.
Thanks
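For anyone hitting this before a fix lands: the issue is just the trailing carriage return in CR/LF files. A line-ending-agnostic parse is a one-liner in most languages; here is a minimal sketch (not mfaktc's actual parser) for the Factor= lines quoted in this thread, where the fields after an optional assignment key are exponent, lower bit level, and upper bit level.

```python
# Sketch of line-ending-agnostic worktodo parsing (not mfaktc's code):
# splitlines() treats \n, \r\n and \r uniformly, so CR/LF files written
# on Windows parse identically on Linux.

def parse_worktodo(text):
    """Yield (exponent, bit_min, bit_max) from Factor= lines."""
    assignments = []
    for line in text.splitlines():
        line = line.strip()
        if not line.startswith("Factor="):
            continue
        fields = line[len("Factor="):].split(",")
        # the first field may be an assignment key such as "N/A"
        exponent, bit_min, bit_max = (int(x) for x in fields[-3:])
        assignments.append((exponent, bit_min, bit_max))
    return assignments

windows_file = "Factor=N/A,50222647,69,72\r\nFactor=N/A,52101913,69,70\r\n"
print(parse_worktodo(windows_file))
# [(50222647, 69, 72), (52101913, 69, 70)]
```

The same idea in C is to strip any trailing '\r' after fgets() before tokenizing the line.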
ckp fast vs regular
Can a ckp file generated by the regular version of mfaktc be used to continue processing using the fast version of mfaktc?
[QUOTE=James Heinrich;281132]a [B]single instance[/B]: GPU, assignment, GPU usage, runtime. If I'm missing any datum it's not much use to me.[/QUOTE]
NVIDIA Quadro FX 880M (GT216) @ 550 MHz
~98% GPU load
no factor for M47677891 from 2^69 to 2^70 [mfaktc 0.18-pre7 71bit_mul24]
tf(): total time spent: 4h 54m 38.525s
SievePrimes @ 200k, ~13.4M/s, CPU wait 32%
[QUOTE=James Heinrich;281105]I've thrown together a rough chart of CUDA GPU performance comparison:
[url]http://mersenne-aries.sili.net/mfaktc.php[/url]
It is not yet properly calibrated. It currently translates GFLOPS (from Wikipedia) into GHz-days/day based on the timing of a single test on my 8800GT. It does not (yet) take into account performance differences of different mfaktc cores etc. But I need some more data to fine-tune it: please send me timing info for a [i]single instance[/i] of mfaktc, including assignment (exponent, from/to bits), GPU model, time to complete the assignment, and GPU usage for that single instance.[/QUOTE]
[QUOTE=James Heinrich;281113]Perfect, thanks. The above is the 4 pieces of info I need.
This is exactly why I need more data points: :smile:
8800GT: 13.90 GFLOPS per GHz-day/day
GTX 460: 8.18 GFLOPS per GHz-day/day
[b]edit:[/b] Hmm, [i]kladner[/i] -- which GPU is your GTX 460 using? GF104 or GF114? (If you're not sure, something like [url=http://www.techpowerup.com/downloads/SysInfo/GPU-Z/]GPU-Z[/url] will tell you.)[/QUOTE]
Well, this might not be so easy...
[LIST]
[*]my GTX 470 (1089 GFLOPS) is [B]4-5 times faster[/B] than my GTX 275 (1011 GFLOPS) for current assignments
[LIST]
[*]compute capability 1.0 (G80 chip): won't work
[*]compute capability 1.1-1.3: same speed
[*]compute capability 2.0: currently the best GFLOPS/mfaktc performance
[*]compute capability 2.1: ~20-35% slower than 2.0 for the same GFLOPS
[/LIST]
[*]a single instance of mfaktc will measure [B]CPU[/B] performance, not GPU performance, for high-end GPUs
[*]you can remove all G80 GPUs from your list: they won't work with mfaktc
[/LIST]
Oliver