2020-05-02, 21:49   #56
SethTro

Quote:
Originally Posted by Uncwilly
My 2 centavos is that any overhead that is not error-checking related should not be more than 0.5%. The loss vs. gain doesn't make sense.
I'm the only person working on this, and I haven't focused on optimizing the code; that can happen after people have been convinced this is worth doing, which has proved to be an uphill struggle.

Quote:
Originally Posted by Uncwilly
What is the potential gain (speed-wise) to the project? Based upon the examination of "expected number of factors", we don't have systematic cheating. We have had a few idiots. Total loss due to them is nearly nil.
Loss comes in many forms: it can be GHz-days, it can be human hours, it can be trust in the project.
You are correct that this won't change the overall speed in any significant way today.
It is my belief that this would increase the reproducibility of, and the overall trust in, the project.
My goal would be that TJAOI stops finding bad results, and that no one reruns TF results because they don't trust the old ones.

Quote:
Originally Posted by Uncwilly
Is this an actual residual of the calculation, or is it just a manufactured calculation? It is not a natural byproduct of the calculation like the LL residual.
It is a natural byproduct of the calculation.
It feels strange that you don't see this, so maybe your claim is that keeping track of the smallest residual is not a natural byproduct?
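
To make it concrete, here's a rough Python sketch (not mfaktc's actual code, and it skips the residue-class filtering real TF does):

Code:
def tf_with_min_residual(p, k_start, k_end):
    """Trial factor 2^p - 1 over candidates f = 2*k*p + 1, keeping
    the smallest residual 2^p mod f seen along the way."""
    best_k, best_r = None, None
    for k in range(k_start, k_end + 1):
        f = 2 * k * p + 1
        r = pow(2, p, f)           # every TF test computes this anyway
        if r == 1:                 # f divides 2^p - 1: factor found
            return ("factor", f)
        if best_r is None or r < best_r:
            best_k, best_r = k, r  # the only extra work: a compare and a store
    # the (k, residual) pair gets reported alongside the "no factor" result
    return ("no_factor", best_k, best_r)

The compare-and-store is the whole overhead, which is why I call it a natural byproduct.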

Quote:
Originally Posted by Uncwilly
This is my very naive concept.
It doesn't feel good to spend time and effort working on something and then have someone call it bad. The use of "very" feels unrestrained.

Quote:
Originally Posted by Uncwilly
Why can't the program take the result of the mod operation at certain select k values and do a rolling operation on them, like a CRC? Or certain classes. The k values to be selected could be calculated based upon the exponent. Doing the operation every 10000 k's tested should not put an undue burden on operation and would prove that at least those items were done in sequence.
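If I'm reading this right, it's roughly the following (a Python sketch; the stride is from your "every 10000 k's", the names are mine, and class filtering is again omitted):

Code:
import zlib

STRIDE = 10000  # "doing the operation every 10000 k's tested"

def tf_with_rolling_crc(p, k_start, k_end):
    """Fold the residual 2^p mod f into a running CRC at every
    STRIDE-th k, so the final checksum depends on those tests
    having been done in sequence."""
    crc = 0
    for k in range(k_start, k_end + 1):
        f = 2 * k * p + 1
        r = pow(2, p, f)
        if r == 1:
            return ("factor", f)
        if k % STRIDE == 0:
            data = r.to_bytes((r.bit_length() + 7) // 8 or 1, "big")
            crc = zlib.crc32(data, crc)  # roll this residual into the checksum
    return ("no_factor", crc)

The fold itself is cheap, so your overhead point stands.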
I think this comes back to what we are worried about.

I'm worried about these properties, in this order:

1. Having any verifiable result associated with each TF submission.
1a. Result(s) should be quickly verifiable by the server (e.g. not requiring a second DC); a sketch of such a check follows this list.
1b. Result(s) should cover a large percentage of the work done.
2. Detecting quickly when the GPU/program is not working (e.g. after 10 submissions, not 200).
3. Preventing malicious/prank submissions (I'd wager I could easily submit 10000 results at zero compute cost and have none of them detected as fake).
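
For 1a, the quick server-side check I have in mind with the smallest-residual approach looks something like this (a sketch only; the slack factor is illustrative, not a worked-out threshold):

Code:
def server_verify(p, reported_k, reported_r, n_tested, slack=100):
    """Check a claimed (k, smallest residual) pair with one modexp.
    Residuals are roughly uniform in [0, f), so an honest minimum over
    n_tested candidates should be around f / n_tested; faking a residual
    that small means doing a comparable amount of real work."""
    f = 2 * reported_k * p + 1
    if pow(2, p, f) != reported_r:  # cheap: a single modular exponentiation
        return False
    return reported_r < slack * f // n_tested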

Your method provides 1 and 1a, and helps with 2 and 3.
My method provides stronger guarantees on 1b, 2, and 3.