mersenneforum.org GPU72 and BOINC a match made in ....

2020-03-26, 18:46   #78
rebirther

Sep 2011
Germany

44738 Posts

Quote:
 Originally Posted by chalsall Thanks for the update. Can't wait to see production! For my own edification (I truly don't know a thing about BOINC)... Why are you doing empirical testing for the temporal domain? The time it takes for each run will approximately double with each bit level, while decreasing the higher the candidate. As in, a 95M will take more time than a 102M to the same bit level (on, of course, the same GPU). This is what is codified in the Perl I gave you. There is a considerable difference in run times between different GPUs on the same work. Does BOINC consider "wall-clock-time" the "value" (regardless of throughput), unlike GIMPS, which awards credit as a function of the work done? Consider me a (somewhat strange) stranger in a strange land...
I need to calculate the range on my reference card. Faster cards can earn more credit than slower cards. Fixed credits are needed to protect against cheating.

2020-03-26, 19:35   #79
KEP

May 2005

32·101 Posts

Quote:
 Originally Posted by rebirther I need to calculate the range on my reference card. Faster cards can earn more credit than slower cards. Fixed credits are needed to protect against cheating.
Yes, protection against cheating is good. However, I think we hit a language barrier again.

Please Reb, there is no need to calculate on a reference card. You know now that n~=105,000,000 from 72 to 73 bits is equal to 1500 BOINC credits. This means that 73 to 74 bits takes 3000 BOINC credits, 74 to 75 takes 6000, 75 to 76 takes 12000, 76 to 77 takes 24000, and 77 to 78 takes 48000.

On a GTX 1070 it takes ~1000 seconds to test 72 to 73 bits, ~2000 seconds for 73 to 74, ~4000 seconds for 74 to 75, ~8000 seconds for 75 to 76, ~16000 seconds for 76 to 77, and ~32000 seconds for 77 to 78, all at n~=105,000,000.

At n~=210,000,000, testing time and credit for the same bit level are 50% of what they were at n~=105,000,000.
At n~=420,000,000, testing time and credit for the same bit level are 25% of what they were at n~=105,000,000.
At n~=840,000,000, testing time and credit for the same bit level are 12.5% of what they were at n~=105,000,000.

So as you can see, there really is not that big a need for testing on a reference card.
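The scaling rule described above can be sketched in a few lines of Python. This is a hypothetical illustration, not actual project code; the function name and constants are assumptions, anchored on the figures quoted in this post:

```python
# Hypothetical sketch of the scaling rule above: credit doubles per bit
# level and halves as the exponent n doubles, anchored at n ~= 105,000,000
# where TF from 72 to 73 bits is worth 1500 BOINC credits.
BASE_N = 105_000_000
BASE_BITS = 72
BASE_CREDIT = 1500

def estimated_credit(n, bits_from):
    """Estimated credit for TF from bits_from to bits_from+1 at exponent n."""
    bit_factor = 2 ** (bits_from - BASE_BITS)  # x2 per bit level
    n_factor = BASE_N / n                      # x0.5 each time n doubles
    return BASE_CREDIT * bit_factor * n_factor
```

For example, this reproduces the 48000 credits for 77 to 78 bits at n~=105M, and the 50% figure at n~=210M.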

2020-03-27, 03:26   #80
axn

Jun 2003

3×1,531 Posts

Quote:
 Originally Posted by KEP Please Reb, there is no need to calculate on a reference card. You know now that n~=105,000,000 from 72 to 73 bits is equal to 1500 BOINC credits. This means that 73 to 74 bits takes 3000 BOINC credits, 74 to 75 takes 6000, 75 to 76 takes 12000, 76 to 77 takes 24000, and 77 to 78 takes 48000. On a GTX 1070 it takes ~1000 seconds to test 72 to 73 bits, ~2000 seconds for 73 to 74, ~4000 seconds for 74 to 75, ~8000 seconds for 75 to 76, ~16000 seconds for 76 to 77, and ~32000 seconds for 77 to 78, all at n~=105,000,000
Be careful here. At different bit levels, the program may use different kernels, and as a result, higher bit levels might take more time than the simple doubling you used suggests. To complicate matters, AMD cards (mfakto) have different points at which the kernel change happens than Nvidia cards (mfaktc) do.

If the objective is to ensure that a given GPU earns the same credit/hour regardless of bit level, you absolutely must benchmark every bit level on the reference cards. Not only that, you might find that an AMD card earns different credit than an Nvidia card for the same WU.

Last fiddled with by axn on 2020-03-27 at 03:27

2020-03-27, 05:45   #81
LaurV
Romulan Interpreter

Jun 2011
Thailand

205448 Posts

Mainly, Chris and KEP are saying the same thing in different words. Testing on any particular card should not be needed: the "credit" should depend only on the output, regardless of what hardware one has, and both methods of calculating it, KEP's and Chris's, are satisfactory. Nobody would care about one or two credits more or less.

Do BOINC clients report the hardware they have? If so, a test on a "reference" card could be done to check whether the reported results are in a reasonable "ballpark", but that is not mandatory, and not very relevant. A cheater could still use his card for other things half of the time and still cheat if he wants. That's why I said the number of factors reported should be watched.

Reb's English looks quite good to me (non-native too). Let's see the production.

Last fiddled with by LaurV on 2020-03-27 at 05:46
2020-03-27, 09:22   #82
KEP

May 2005

32·101 Posts

Quote:
 Originally Posted by LaurV Do BOINC clients report the hardware they have? If so, a test on a "reference" card could be done to check whether the reported results are in a reasonable "ballpark", but that is not mandatory, and not very relevant. A cheater could still use his card for other things half of the time and still cheat if he wants. That's why I said the number of factors reported should be watched.
No, they do not directly report that. You can, however, see in the computer specs what GPU each computer has, but it may not be necessary, because overall, what do a few credits more or less matter?

Yes, Reb's English is good enough. Sometimes, just as you mentioned with the credit, we explain ourselves with different words but in fact say almost or exactly the same thing as the other ... that's one of the oddities of human language.

2020-03-27, 09:32   #83
KEP

May 2005

32·101 Posts

Quote:
 Originally Posted by axn Be careful here. At different bit levels, the program may use different kernels, and as a result, higher bit levels might take more time than the simple doubling you used suggests. To complicate matters, AMD cards (mfakto) have different points at which the kernel change happens than Nvidia cards (mfaktc) do. If the objective is to ensure that a given GPU earns the same credit/hour regardless of bit level, you absolutely must benchmark every bit level on the reference cards. Not only that, you might find that an AMD card earns different credit than an Nvidia card for the same WU.
Okay, but will it be more than a 10% slowdown at higher bit levels?

My ancient ASUS GPU (6-8 years old, now retired) only showed a 10% slowdown at 75+ bits.

Maybe it is a good idea for Reb to benchmark all bit levels at n=105M, and then we use the credit at each bit level for n=105M as the reference to calculate the credit for future test n. Would that be more accurate?

Do higher n also need to be benchmarked, or is ((105M/test_n)*credit_at_current_bit_level_for_n=105M) still accurate enough for n=999M?

2020-03-27, 10:33   #84
axn

Jun 2003

459310 Posts

Quote:
 Originally Posted by KEP Do higher n also need to be benchmarked, or is ((105M/test_n)*credit_at_current_bit_level_for_n=105M) still accurate enough for n=999M?
No need for different n. As you said, it is just simple scaling. But bit levels, yes. And separately for Nvidia and AMD.

However, I am not sure of the exact difference in performance -- probably on the order of 10-15%.
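The benchmarking scheme axn suggests could be sketched as a lookup table: measure each bit level once per vendor at a reference n, then scale only by n. This is a hypothetical illustration; the table values are made-up placeholders, not real benchmark results:

```python
# Hypothetical sketch: credit per (vendor, bit level) comes from one-time
# reference-card benchmarks at n ~= 105M; only n is scaled afterwards.
# All numbers below are illustrative placeholders, not real measurements.
REF_N = 105_000_000

BENCH_CREDIT = {
    ("nvidia", 72): 1500,
    ("nvidia", 73): 3000,
    ("amd", 72): 1650,   # kernel-change points differ, so values need not double exactly
    ("amd", 73): 3200,
}

def credit(vendor, bits_from, n):
    """Credit for TF from bits_from to bits_from+1 at exponent n on the given vendor's card."""
    return BENCH_CREDIT[(vendor, bits_from)] * REF_N / n
```

The point of the table is that each bit level is an independent measurement, so kernel changes at higher bit levels are captured rather than assumed away by simple doubling.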

2020-03-27, 11:26   #85
KEP

May 2005

32×101 Posts

Quote:
 Originally Posted by axn No need for different n. As you said, it is just simple scaling. But bit levels, yes. And separately for Nvidia and AMD. However, I am not sure of the exact difference in performance -- probably on the order of 10-15%.
But won't that difference just mean that the Nvidia cards do more work than the AMD cards, and therefore should and will earn more credit a day than an AMD card does? I don't really think it will be possible to make differentiated credit for Nvidia versus AMD cards.

2020-03-27, 11:42   #86
axn

Jun 2003

3×1,531 Posts

Quote:
 Originally Posted by KEP I don't really think it will be possible to make differentiated credit for Nvidia versus AMD cards.
Hmmm... This could be an issue. I guess once you have benchmark data, the extent of the issue can be quantified.

2020-03-27, 12:50   #87
KEP

May 2005

38D16 Posts

Quote:
 Originally Posted by axn Hmmm... This could be an issue. I guess once you have benchmark data, the extent of the issue can be quantified.
Yeah, let's wait and see. I just assume, without anything to base it on, that if the Nvidia cards are faster, they simply do more work and hence get more credit.

Am I missing something, or is the number of calculations for the exact same n the same for an AMD and an Nvidia card? ... If it is, then the credit should scale naturally for both cards, since the Nvidia card will compute more per day and thus get more credit than the AMD card, which computes less.

2020-03-27, 13:47   #88
chalsall
If I May

"Chris Halsall"
Sep 2002

5·7·257 Posts

Quote:
 Originally Posted by KEP Am I missing something, or is the number of calculations for the exact same n the same for an AMD and an Nvidia card?
That is correct. In fact, the equation I gave above was originally designed for calculating the credit for CPU TF'ing. There was a bit of a lengthy debate years ago about whether scaling should be applied to give GPUs less credit, because they are ***soooo*** much faster at the work.

