[QUOTE=TObject;305964]Also try GIF; it beats JPEG when there are[B]n't[/B] a lot of continuous tones in the picture.[/QUOTE]
Fixed that for you. Actually, PNG is better still in this case... |
[QUOTE=CRGreathouse;305975]Fixed that for you. Actually, PNG is better still in this case...[/QUOTE]
PNG has excellent quality, but it tends to get big. |
I run two instances of mfaktc on a GTX 560, with four workers of mprime also running (on a Core i5-2500K). Under that load mfaktc runs with SievePrimes=5000. When I stop mprime, SievePrimes slowly climbs to about 25000, until I start mprime again. (Output from one instance of mfaktc shown below.)
[CODE]
class | candidates | time | ETA | avg. rate | SievePrimes | CPU wait
96/4620 | 721.16M | 6.546s | 1h42m | 110.17M/s | 5000 | 0.91%
101/4620 | 721.16M | 6.542s | 1h42m | 110.24M/s | 5000 | 0.93%
105/4620 | 721.16M | 6.578s | 1h42m | 109.63M/s | 5000 | 0.92%
116/4620 | 721.16M | 6.466s | 1h40m | 111.53M/s | 5000 | 0.80%
117/4620 | 721.16M | 6.557s | 1h42m | 109.98M/s | 5000 | 0.98%
129/4620 | 721.16M | 6.460s | 1h40m | 111.63M/s | 5000 | 9.90%
137/4620 | 712.90M | 6.326s | 1h38m | 112.69M/s | 5625 | 32.87%
140/4620 | 704.64M | 6.217s | 1h36m | 113.34M/s | 6328 | 31.02%
141/4620 | 696.39M | 6.290s | 1h37m | 110.71M/s | 7119 | 31.54%
144/4620 | 688.13M | 6.155s | 1h35m | 111.80M/s | 8008 | 28.21%
149/4620 | 680.79M | 6.117s | 1h34m | 111.29M/s | 9009 | 26.61%
152/4620 | 672.53M | 5.984s | 1h32m | 112.39M/s | 10135 | 24.15%
156/4620 | 665.19M | 5.870s | 1h30m | 113.32M/s | 11401 | 21.52%
161/4620 | 657.85M | 5.831s | 1h30m | 112.82M/s | 12826 | 19.00%
165/4620 | 650.51M | 5.730s | 1h28m | 113.53M/s | 14429 | 15.82%
176/4620 | 644.09M | 5.683s | 1h27m | 113.34M/s | 16232 | 13.40%
177/4620 | 636.75M | 5.632s | 1h26m | 113.06M/s | 18261 | 11.72%
180/4620 | 630.33M | 5.557s | 1h25m | 113.43M/s | 20543 | 9.00%
185/4620 | 623.90M | 5.544s | 1h25m | 112.54M/s | 23110 | 6.57%
189/4620 | 617.48M | 5.413s | 1h23m | 114.07M/s | 25998 | 1.93%
class | candidates | time | ETA | avg. rate | SievePrimes | CPU wait
192/4620 | 624.82M | 5.570s | 1h25m | 112.18M/s | 22748 | 7.19%
200/4620 | 618.40M | 5.477s | 1h23m | 112.91M/s | 25591 | 3.15%
201/4620 | 618.40M | 5.430s | 1h22m | 113.89M/s | 25591 | 2.21%
204/4620 | 618.40M | 5.495s | 1h23m | 112.54M/s | 25591 | 3.62%
212/4620 | 618.40M | 5.449s | 1h23m | 113.49M/s | 25591 | 2.70%
221/4620 | 618.40M | 5.436s | 1h22m | 113.76M/s | 25591 | 2.46%
224/4620 | 618.40M | 5.452s | 1h22m | 113.43M/s | 25591 | 2.77%
225/4620 | 618.40M | 5.437s | 1h22m | 113.74M/s | 25591 | 2.39%
236/4620 | 618.40M | 5.420s | 1h22m | 114.10M/s | 25591 | 2.39%
240/4620 | 618.40M | 5.447s | 1h22m | 113.53M/s | 25591 | 2.78%
245/4620 | 618.40M | 5.468s | 1h22m | 113.09M/s | 25591 | 3.17%
249/4620 | 618.40M | 5.476s | 1h22m | 112.93M/s | 25591 | 3.40%
257/4620 | 618.40M | 5.430s | 1h22m | 113.89M/s | 25591 | 2.60%
260/4620 | 618.40M | 5.481s | 1h22m | 112.83M/s | 25591 | 3.31%
261/4620 | 618.40M | 5.466s | 1h22m | 113.14M/s | 25591 | 3.12%
264/4620 | 618.40M | 5.416s | 1h21m | 114.18M/s | 25591 | 2.27%
269/4620 | 618.40M | 5.431s | 1h21m | 113.86M/s | 25591 | 2.61%
276/4620 | 618.40M | 5.423s | 1h21m | 114.03M/s | 25591 | 2.69%
281/4620 | 618.40M | 5.460s | 1h21m | 113.26M/s | 25591 | 3.15%
284/4620 | 618.40M | 5.442s | 1h21m | 113.63M/s | 25591 | 2.76%
class | candidates | time | ETA | avg. rate | SievePrimes | CPU wait
297/4620 | 618.40M | 5.424s | 1h21m | 114.01M/s | 25591 | 2.47%
305/4620 | 618.40M | 5.502s | 1h22m | 112.40M/s | 25591 | 3.83%
309/4620 | 618.40M | 5.458s | 1h21m | 113.30M/s | 25591 | 2.87%
312/4620 | 618.40M | 5.515s | 1h22m | 112.13M/s | 25591 | 2.78%
317/4620 | 618.40M | 5.782s | 1h26m | 106.95M/s | 25591 | 0.57%
320/4620 | 625.74M | 8.418s | 2h05m | 74.33M/s | 22392 | 0.60%
324/4620 | 633.08M | 7.911s | 1h57m | 80.02M/s | 19593 | 0.62%
332/4620 | 640.42M | 7.762s | 1h55m | 82.51M/s | 17143 | 0.63%
336/4620 | 648.68M | 7.675s | 1h53m | 84.52M/s | 15000 | 0.75%
341/4620 | 656.93M | 7.385s | 1h49m | 88.96M/s | 13125 | 0.59%
344/4620 | 665.19M | 7.307s | 1h48m | 91.03M/s | 11484 | 0.74%
345/4620 | 673.45M | 7.173s | 1h46m | 93.89M/s | 10048 | 0.75%
357/4620 | 681.71M | 7.105s | 1h45m | 95.95M/s | 8792 | 0.77%
360/4620 | 690.88M | 6.871s | 1h41m | 100.55M/s | 7693 | 0.65%
365/4620 | 700.06M | 6.856s | 1h41m | 102.11M/s | 6731 | 1.24%
369/4620 | 709.23M | 6.672s | 1h38m | 106.30M/s | 5889 | 0.63%
372/4620 | 719.32M | 6.563s | 1h36m | 109.60M/s | 5152 | 1.13%
380/4620 | 721.16M | 6.514s | 1h35m | 110.71M/s | 5000 | 0.83%
381/4620 | 721.16M | 6.499s | 1h35m | 110.96M/s | 5000 | 0.86%
389/4620 | 721.16M | 6.524s | 1h35m | 110.54M/s | 5000 | 1.48%
class | candidates | time | ETA | avg. rate | SievePrimes | CPU wait
392/4620 | 721.16M | 6.544s | 1h35m | 110.20M/s | 5000 | 2.11%
396/4620 | 721.16M | 6.516s | 1h35m | 110.67M/s | 5000 | 1.22%
401/4620 | 721.16M | 6.520s | 1h35m | 110.61M/s | 5000 | 0.86%
[/CODE] |
It seems to be a memory-bus bottleneck; I know that problem, it happens here too with an i7. Try reducing the number of mprime cores step by step and watch what mfaktc does. You could also monitor the GPU with GPU-Z or another performance tool.
Norman |
[QUOTE=NormanRKN;306285]It seems to be a memory-bus bottleneck; I know that problem, it happens here too with an i7. Try reducing the number of mprime cores step by step and watch what mfaktc does. You could also monitor the GPU with GPU-Z or another performance tool.
Norman[/QUOTE] It's not a memory problem, it's core sharing: with less CPU processing available, a lower SievePrimes value gets used for the GPU. |
[QUOTE=bcp19;306290]It's not a memory problem, it's core sharing: with less CPU processing available, a lower SievePrimes value gets used for the GPU.[/QUOTE]
Yes, but I am assuming that prime95 is running at a lower priority, so there shouldn't be much difference CPU-wise. |
[QUOTE=henryzz;306317]Yes, but I am assuming that prime95 is running at a lower priority, so there shouldn't be much difference CPU-wise.[/QUOTE]
The simple fact that M/s climbs when mprime stops shows that the GPU is not running at its full potential when both are running. Admittedly the difference is very minor (~1%), but it is still there. When I first had my 2500K running, I noticed one core almost exactly kept up with a 560, so I had a second core run mfakto on a 5770, sharing that core with P95 (mainly because without P95 the SievePrimes was 200,000 and CPU wait was over 15%). The timings showed P95 was using over 20% of the core, and SievePrimes balanced around 60,000. |
1 Attachment(s)
I actually have SievePrimes set to auto-adjust in the range 2000-10000, with a default of 5000; it rarely strays much from 5000 unless I'm doing a bunch of other work that spills onto mfaktc's cores, and even then only maybe down to 4500 or up to 6000.
Two instances of mfaktc, one GTX 570, locked to the first two physical cores of an i7-3930K @ 4125 MHz (the last four cores are doing P-1). GPU usage averages 95%. |
[QUOTE=James Heinrich;306573]I actually have SievePrimes set to auto-adjust in the range 2000-10000, with a default of 5000.......[/QUOTE]
Pardon, James: is this possible in mfaktc v0.18? If so, how? Apologies if this has been covered before.:confused: |
[QUOTE=kladner;306582]Pardon, James: is this possible in mfaktc v0.18?[/QUOTE]No.[quote=mfaktc changelog]version 0.19-pre9 (2012-07-08)
... other stuff ... - SievePrimesMin is lowered to 2000 (usually not very useful, but requested quite often)[/quote] |
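For anyone looking for the actual knobs: in mfaktc 0.19 and later, the sieve depth is controlled from mfaktc.ini with entries roughly like the ones below. The names follow recent mfaktc.ini files, but available options and default bounds vary between releases, so check the comments in your own ini file:

[CODE]
# starting sieve depth (number of primes used for sieving)
SievePrimes=5000

# 1 = let mfaktc raise/lower SievePrimes automatically based on CPU wait
SievePrimesAdjust=1

# lower/upper bounds for the automatic adjustment
SievePrimesMin=2000
SievePrimesMax=10000
[/CODE]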
Thanks!
|
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2021, Jelsoft Enterprises Ltd.