#78
Sep 2011
Germany
2²×3²×79 Posts
Quote:
#79
Quasi Admin Thing
May 2005
1111000110₂ Posts
Quote:
Please Reb, there is no need to calculate on a reference card. You now know that, at n~=105,000,000, the 72-to-73-bit level is worth 1,500 BOINC credits. That means 73 to 74 bits is worth 3,000 BOINC credits, 74 to 75 bits 6,000, 75 to 76 bits 12,000, 76 to 77 bits 24,000, and 77 to 78 bits 48,000.

On a GTX 1070 it takes ~1,000 seconds to test 72 to 73 bits, ~2,000 seconds for 73 to 74, ~4,000 seconds for 74 to 75, ~8,000 seconds for 75 to 76, ~16,000 seconds for 76 to 77, and ~32,000 seconds for 77 to 78, all at n~=105,000,000.

At n~=210,000,000, testing time and credit for the same bit level are 50% of what they were at n~=105,000,000; at n~=420,000,000 they are 25%; and at n~=840,000,000 they are 12.5%.

So as you can see, there really is not that big a need for testing on a reference card.
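The doubling-per-bit, halving-per-doubled-n rule described above can be sketched as a tiny formula. This is only an illustration of the numbers quoted in this thread: the function name `tf_credit` and the constants are assumptions for the sketch, not anything from project code.

```python
# Sketch of the credit model described above, under these assumptions:
#  - 1,500 BOINC credits for the 72->73 bit level at n ~= 105,000,000
#  - credit doubles for each additional bit level
#  - credit halves each time n doubles (i.e. scales as 1/n)
BASE_N = 105_000_000
BASE_BIT = 72        # starting bit level of the reference assignment (72 -> 73)
BASE_CREDIT = 1500   # credits for that reference assignment

def tf_credit(n: int, bit_from: int) -> float:
    """Estimated credit for TF'ing from bit_from to bit_from + 1 at exponent n."""
    return BASE_CREDIT * 2 ** (bit_from - BASE_BIT) * (BASE_N / n)

print(tf_credit(105_000_000, 77))  # 48000.0 -- matches the 77->78 figure above
print(tf_credit(210_000_000, 72))  # 750.0 -- half the base credit, since n doubled
```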
#80
Jun 2003
5,051 Posts
Quote:
If the objective is to ensure that a given GPU earns the same credit/hour regardless of bit level, you absolutely must benchmark every bit level on reference cards. Not only that, you might find that an AMD card earns different credit than an Nvidia card for the same WU.
Last fiddled with by axn on 2020-03-27 at 03:27
#81
Romulan Interpreter
Jun 2011
Thailand
2·5·31² Posts
Mainly, Chris and Kep say the same thing with different words.
Testing on any card should not be needed. The "credit" should depend only on the output, regardless of what hardware one has. Both methods of calculating, Kep's and Chris's, are satisfactory; nobody would care about one or two credits more or less.

Do BOINC clients report the hardware they have? If so, a test on a "reference" card could be done to check whether the reported results are in a reasonable ballpark, but that is not mandatory, and not much relevant. A cheater could still use his card for other things half of the time and still cheat, if he wants. That's why I said that the number of factors reported should be watched.

Reb's English looks quite good to me (non-native too). Let's see the production.
Last fiddled with by LaurV on 2020-03-27 at 05:46 |
#82
Quasi Admin Thing
May 2005
2×3×7×23 Posts
Quote:
Yes, Reb's English is good enough. Sometimes, just as you mentioned with credit, we explain ourselves with different words but in fact say almost or exactly the same thing as each other ... that's one of the oddities of human language.
#83
Quasi Admin Thing
May 2005
1111000110₂ Posts
Quote:
My ancient ASUS GPU (6-8 years old, now retired) only showed a 10% slowdown at 75+ bits. Maybe it is a good idea for Reb to benchmark all bit levels at n=105M, and then we use the credit at each bit level for n=105M as the reference to calculate the credit for future test n. Will that be more accurate? Does higher n also need to be benchmarked, or is ((105M/test_n)*credit_at_current_bit_level_for_n=105M) still accurate enough for n=999M?
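The benchmark-table idea above can be sketched like this. The per-bit credit values are hypothetical, simply following the doubling pattern quoted earlier in the thread; real values would come from actual benchmarks at n=105M, and the names `CREDIT_AT_REF` and `scaled_credit` are mine, not project code.

```python
REF_N = 105_000_000
# Credit benchmarked at each bit level for n ~= 105M (hypothetical values,
# following the doubling pattern quoted earlier in this thread)
CREDIT_AT_REF = {72: 1500, 73: 3000, 74: 6000, 75: 12000, 76: 24000, 77: 48000}

def scaled_credit(test_n: int, bit_from: int) -> float:
    """Apply ((105M / test_n) * credit_at_current_bit_level_for_n=105M)."""
    return (REF_N / test_n) * CREDIT_AT_REF[bit_from]

print(scaled_credit(999_000_000, 75))  # ~1261 credits for 75->76 at n ~= 999M
```

Whether this simple 1/n scaling stays accurate all the way out to n=999M is exactly the open question here; the table only pins down the per-bit ratios at n=105M.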
#84
Jun 2003
5,051 Posts
Quote:
However, I am not sure of the exact difference in performance -- probably on the order of 10-15%.
#85
Quasi Admin Thing
May 2005
2·3·7·23 Posts
But will that difference not just mean that the Nvidia cards get more done than the AMD cards, and therefore also should and will earn more credit a day than an AMD card does? I don't really think it will be possible to make differentiated credit for Nvidia versus AMD cards.
#86
Jun 2003
1001110111011₂ Posts
#87
Quasi Admin Thing
May 2005
2·3·7·23 Posts
Quote:
Am I missing something, or is the number of calculations for the exact same n not the same for an AMD and an Nvidia card? ... If it is the same, then the credit should scale to both cards, since the Nvidia will compute more per day and thus get more credit, compared to the AMD, which will compute less.
#88
If I May
"Chris Halsall"
Sep 2002
Barbados
10011000000010₂ Posts
That is correct. In fact, the equation I gave above was originally designed for calculating the credit for CPU TF'ing. There was a bit of a lengthy debate years ago about whether scaling should be applied to give GPUs less credit, because they are ***soooo*** much faster at the work.