mersenneforum.org > Great Internet Mersenne Prime Search > PrimeNet > GPU to 72

Closed Thread
 
Old 2020-03-26, 18:46   #78
rebirther

Sep 2011
Germany

17·139 Posts

Quote:
Originally Posted by chalsall View Post
Thanks for the update. Can't wait to see production!

For my own edification (I truly don't know a thing about BOINC)... why are you determining the run times empirically? The time each run takes will approximately double with each bit level, while decreasing as the exponent increases. That is, a 95M candidate will take more time than a 102M candidate to the same bit level (on the same GPU, of course). This is what is codified in the Perl I gave you.

There is a considerable difference in run times between different GPUs on the same work. Does BOINC consider "wall-clock time" the "value" (regardless of throughput), unlike GIMPS, which awards credit as a function of the work done?

Consider me a (somewhat strange) stranger in a strange land...
I need to calculate the range on my reference card. Any card that is faster can earn more credits than slower cards. Fixed credits are needed to protect against cheating.
rebirther is offline  
Old 2020-03-26, 19:35   #79
KEP
Quasi Admin Thing

May 2005

3²·101 Posts

Quote:
Originally Posted by rebirther View Post
I need to calculate the range on my reference card. Any card that is faster can earn more credits than slower cards. Fixed credits are needed to protect against cheating.
Yes, protection against cheating is good. However, I think we hit a language barrier again.

Please Reb, there is no need to calculate on a reference card. You now know that n~=105,000,000 from 72 to 73 bits is worth 1500 BOINC credits. This means that 73 to 74 bits is worth 3000 BOINC credits, 74 to 75 bits 6000, 75 to 76 bits 12000, 76 to 77 bits 24000, and 77 to 78 bits 48000.

On a GTX 1070 it takes ~1000 seconds to test 72 to 73 bits, ~2000 seconds for 73 to 74, ~4000 seconds for 74 to 75, ~8000 seconds for 75 to 76, ~16000 seconds for 76 to 77, and ~32000 seconds for 77 to 78, all at n~=105,000,000.

At n~=210,000,000, testing time and credit for the same bit level are 50% of what they are at n~=105,000,000.
At n~=420,000,000, testing time and credit for the same bit level are 25% of what they are at n~=105,000,000.
At n~=840,000,000, testing time and credit for the same bit level are 12.5% of what they are at n~=105,000,000.

So, as you can see, there really is not that big a need for testing on a reference card.
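[Editor's note] KEP's rule of thumb (credit doubles with each bit level and scales inversely with the exponent) can be written as a one-line formula. The sketch below is illustrative only: the 1500-credit baseline for 72→73 bits at n≈105M comes from the post above, while the function and constant names are invented.

```python
# Sketch of KEP's proposed credit model: credit doubles with each bit
# level and scales inversely with the exponent n.
# Baseline: 1500 credits for bit level 72->73 at n ~= 105,000,000.

BASE_CREDIT = 1500      # credits for 72->73 at the baseline exponent
BASE_BITS = 72          # starting bit level of the baseline WU
BASE_N = 105_000_000    # baseline exponent

def credit(bit_start: int, n: int) -> float:
    """Credit for trial-factoring exponent n from bit_start to bit_start + 1."""
    return BASE_CREDIT * 2 ** (bit_start - BASE_BITS) * (BASE_N / n)

# Reproduce the figures from the post:
assert credit(72, 105_000_000) == 1500    # 72->73 at the baseline
assert credit(77, 105_000_000) == 48000   # 77->78: five doublings
assert credit(72, 210_000_000) == 750     # 50% at a doubled exponent
```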
KEP is offline  
Old 2020-03-27, 03:26   #80
axn

Jun 2003

10761₈ Posts

Quote:
Originally Posted by KEP View Post
Please Reb, there is no need to calculate on a reference card. You now know that n~=105,000,000 from 72 to 73 bits is worth 1500 BOINC credits. This means that 73 to 74 bits is worth 3000 BOINC credits, 74 to 75 bits 6000, 75 to 76 bits 12000, 76 to 77 bits 24000, and 77 to 78 bits 48000.

On a GTX 1070 it takes ~1000 seconds to test 72 to 73 bits, ~2000 seconds for 73 to 74, ~4000 seconds for 74 to 75, ~8000 seconds for 75 to 76, ~16000 seconds for 76 to 77, and ~32000 seconds for 77 to 78, all at n~=105,000,000.
Be careful here. At different bit levels the program may use different kernels, so higher bit levels might take more time than the simple doubling you used. To complicate matters, AMD cards (mfakto) have different points at which the kernel change happens than Nvidia cards (mfaktc) do.

If the objective is to ensure that a given GPU earns the same credit/hour regardless of bit level, you absolutely must benchmark every bit level on reference cards. Not only that, you might find that an AMD card earns different credit than an Nvidia card for the same WU.

Last fiddled with by axn on 2020-03-27 at 03:27
axn is offline  
Old 2020-03-27, 05:45   #81
LaurV
Romulan Interpreter

Jun 2011
Thailand

2²×2,137 Posts

Mainly, Chris and Kep are saying the same thing in different words.

Testing on any card should not be needed. The "credit" should depend only on the output, regardless of what hardware one has.

Both methods of calculating, Kep's and Chris's, are satisfactory. Nobody would care about one or two credits more or less.

Do BOINC clients report the hardware they have? If so, a test on a "reference" card could be done to check whether reported results are in a reasonable "ballpark", but that is not mandatory and not very relevant. A cheater could still use his card for other things half of the time and still cheat, if he wants. That's why I said that the number of factors reported should be watched.

Reb's English looks quite good to me (I'm a non-native speaker too).


Let's see the production.
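[Editor's note] LaurV's idea of watching the number of reported factors can be made concrete. A rough GIMPS heuristic says the chance of a factor in the range [2^b, 2^(b+1)] is about 1/b; the toy sketch below combines that with a Poisson approximation. The function names and the 3-sigma threshold are hypothetical choices, not anything from the thread.

```python
import math

def expected_factors(bit_level: int, num_wus: int) -> float:
    """Expected factor count across num_wus one-bit-level TF work units,
    using the rough ~1/b chance of a factor per bit level."""
    return num_wus / bit_level

def looks_suspicious(found: int, bit_level: int, num_wus: int,
                     z: float = 3.0) -> bool:
    """Flag a user whose factor count sits more than z standard
    deviations below expectation (Poisson: sigma = sqrt(mu))."""
    mu = expected_factors(bit_level, num_wus)
    return found < mu - z * math.sqrt(mu)

# 7300 WUs at bit level 73 -> ~100 expected factors; reporting only
# 60 is suspicious, reporting 95 is not.
assert looks_suspicious(60, 73, 7300)
assert not looks_suspicious(95, 73, 7300)
```

Of course, as LaurV notes, this only catches a cheater statistically, over many work units.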

Last fiddled with by LaurV on 2020-03-27 at 05:46
LaurV is offline  
Old 2020-03-27, 09:22   #82
KEP
Quasi Admin Thing

May 2005

3²·101 Posts

Quote:
Originally Posted by LaurV View Post
Do BOINC clients report the hardware they have? If so, a test on a "reference" card could be done to check whether reported results are in a reasonable "ballpark", but that is not mandatory and not very relevant. A cheater could still use his card for other things half of the time and still cheat, if he wants. That's why I said that the number of factors reported should be watched.
No, they do not directly report that. You can, however, see in the computer specs what GPU each computer has, but it may not be necessary, because overall, what do a few credits more or less matter?

Yes, Reb's English is good enough. Sometimes, just as you mentioned with the credit, we explain ourselves in different words but in fact say almost or exactly the same thing as each other... that's one of the oddities of human language.
KEP is offline  
Old 2020-03-27, 09:32   #83
KEP
Quasi Admin Thing

May 2005

3²·101 Posts

Quote:
Originally Posted by axn View Post
Be careful here. At different bit levels the program may use different kernels, so higher bit levels might take more time than the simple doubling you used. To complicate matters, AMD cards (mfakto) have different points at which the kernel change happens than Nvidia cards (mfaktc) do.

If the objective is to ensure that a given GPU earns the same credit/hour regardless of bit level, you absolutely must benchmark every bit level on reference cards. Not only that, you might find that an AMD card earns different credit than an Nvidia card for the same WU.
Okay, but will it be more than a 10% slowdown at higher bit levels?

My ancient 6-8-year-old ASUS GPU (now retired) only showed a 10% slowdown at 75+ bits.

Maybe it is a good idea for Reb to benchmark all bit levels at n=105M; then we use the credit at each bit level for n=105M as a reference to calculate the credit for future test n. Would that be more accurate?

Does higher n also need to be benchmarked, or is ((105M/test_n)*credit_at_current_bit_level_for_n=105M) still accurate enough for n=999M?
KEP is offline  
Old 2020-03-27, 10:33   #84
axn

Jun 2003

3·1,531 Posts

Quote:
Originally Posted by KEP View Post
Does higher n also need to be benchmarked, or is ((105M/test_n)*credit_at_current_bit_level_for_n=105M) still accurate enough for n=999M?
No need for different n. As you said, it is just simple scaling. But bit levels, yes. And Nvidia and AMD separately.

However, I am not sure of the exact difference in performance -- probably on the order of 10-15%.
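[Editor's note] axn's point, that kernel switches break the clean doubling, suggests deriving credit from a measured per-bit-level, per-vendor benchmark table rather than a formula. Everything below is a sketch with invented numbers; only the idea of separate mfaktc/mfakto tables comes from the posts.

```python
# Hypothetical reference-card benchmark times (seconds) per bit level
# at the reference exponent. Note the slightly-worse-than-2x steps,
# as would happen where a kernel change kicks in.
BENCH_SECONDS = {
    "nvidia": {72: 1000, 73: 2000, 74: 4100, 75: 8600, 76: 17500},  # mfaktc
    "amd":    {72: 1100, 73: 2150, 74: 4300, 75: 9200, 76: 18900},  # mfakto
}
REF_N = 105_000_000
CREDIT_PER_SECOND = 1.5  # calibrated so 72->73 on the Nvidia reference pays 1500

def credit(vendor: str, bit_start: int, n: int) -> float:
    """Credit for TF of exponent n from bit_start to bit_start + 1,
    based on measured reference times instead of assumed doubling."""
    return BENCH_SECONDS[vendor][bit_start] * CREDIT_PER_SECOND * (REF_N / n)

assert credit("nvidia", 72, 105_000_000) == 1500
```

Whether the two vendor tables should actually pay different credit for the same WU is exactly the question debated in the posts that follow.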
axn is offline  
Old 2020-03-27, 11:26   #85
KEP
Quasi Admin Thing

May 2005

1615₈ Posts

Quote:
Originally Posted by axn View Post
No need for different n. As you said, it is just simple scaling. But bit levels, yes. And Nvidia and AMD separately.

However, I am not sure of the exact difference in performance -- probably on the order of 10-15%.
But will that difference not just mean that the Nvidia cards do more work than the AMD cards, and therefore should, and will, earn more credit per day than an AMD card does? I don't really think it will be possible to award differentiated credit for Nvidia versus AMD cards.
KEP is offline  
Old 2020-03-27, 11:42   #86
axn

Jun 2003

4593₁₀ Posts

Quote:
Originally Posted by KEP View Post
I don't really think it will be possible to award differentiated credit for Nvidia versus AMD cards.
Hmmm... This could be an issue. I guess once you have benchmark data, the extent of the issue can be quantified.
axn is offline  
Old 2020-03-27, 12:50   #87
KEP
Quasi Admin Thing

May 2005

3²·101 Posts

Quote:
Originally Posted by axn View Post
Hmmm... This could be an issue. I guess once you have benchmark data, the extent of the issue can be quantified.
Yeah, let's wait and see. I just assume, without anything to base it upon, that if the Nvidia cards are faster, they simply do more work and hence get more credit.

Am I missing something, or is the amount of calculation for the exact same n not the same on an AMD card and an Nvidia card? If it is the same, then the credit should scale for both cards, since the Nvidia will compute more per day and thus get more credit than the AMD, which will compute less.
KEP is offline  
Old 2020-03-27, 13:47   #88
chalsall
If I May

"Chris Halsall"
Sep 2002
Barbados

2322₁₆ Posts

Quote:
Originally Posted by KEP View Post
Am I missing something, or is the amount of calculation for the exact same n not the same on an AMD card and an Nvidia card?
That is correct. In fact, the equation I gave above was originally designed for calculating the credit for CPU TF'ing. There was a bit of a lengthy debate years ago about whether scaling should be applied to give GPUs less credit, because they are ***soooo*** much faster at the work.
chalsall is offline  
