20141216, 21:39  #1 
Mar 2014
2^{4}·3 Posts 
"Proven reliable"
My account has been set to receive small exponents for some time, and on my old machine, I have been receiving them.
My new machine has been assigned larger exponents, which is no surprise: that machine hasn't returned any results yet. The rules say the machine has to return 2 results in 90 days, and be "proven reliable," before small exponents will be offered. What criteria does PrimeNet use to decide when a computer has proven reliable? 
20141217, 01:45  #2 
∂^{2}ω=0
Sep 2002
República de California
2·3^{3}·5·43 Posts 
Small exponents means double-checks, so 2 successful DCs (matching the first run done previously by some other machine) in 90 days sounds like the operative criterion.

20141217, 02:50  #3 
May 2013
East. Always East.
11·157 Posts 
I think "small" in this case means the trailing edge of the "wave", as in the lowest 1000 DCs and lowest 2000 LLs, or whatever the category sizes were.

20150319, 22:47  #4  
Aug 2002
Rovereto (Italy)
237_{8} Posts 
Quote:
Recently I decided to DC... 5 out of 7 of my DCs matched previous results from other users, and 2 did not. On the CPU properties page, the Reliability dropped from 1 to 0.98 and the Confidence is now 4. What does that mean? Apparently my runs didn't show errors... apparently. What if a third check later shows that the second result was correct (i.e., matching the third one), so that one may suppose it was the first run that was wrong? Last fiddled with by guido72 on 20150319 at 22:48 

20150320, 03:07  #5 
1976 Toyota Corona years forever!
"Wayne"
Nov 2006
Saskatchewan, Canada
2×3^{2}×11×23 Posts 
I just went through this ... I am relatively certain that what is required is that each core of each CPU on which you want preferred assignments needs to complete 2 LL or DC tests (of any size) without returning an error code or otherwise suspicious result.
Matching residues are NOT necessary; you might be the first to test an exponent ... though mismatched residues MAY be an indication that your result is suspicious. So, for example, if Core #1 finishes 2 DCs (or LLs) without error, it will be eligible for preferred LL or DC assignments. However, if Cores 2-4 still have NOT completed their 2 each, they will continue to get non-preferred assignments until they have. 
20150320, 03:51  #6 
P90 years forever!
Aug 2002
Yeehaw, FL
2·13·283 Posts 
petrw1 is close to right. Your computer's reliability is strictly governed by returning error-free results. I don't think reliability can exceed 0.98.
A four-core machine running four workers will get preferred assignments once it has completed 8 LLs. petrw1 indicated the preferred-assignments algorithm is calculated for each core; instead, it is calculated for each computer. 
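The per-computer rule described above can be sketched in a few lines. This is a hypothetical illustration, not PrimeNet's actual code; the function name and the "2 results per worker" threshold are assumptions taken from the post.

```python
def eligible_for_preferred(results_completed: int, num_workers: int) -> bool:
    """Assumed rule: a computer (not each core) becomes eligible for
    preferred LL/DC assignments once it has returned two error-free
    results per worker, e.g. 8 results for a four-worker machine."""
    return results_completed >= 2 * num_workers

# A four-core machine running four workers:
print(eligible_for_preferred(7, 4))  # False: only 7 of the 8 needed results
print(eligible_for_preferred(8, 4))  # True
```

The point of the correction in this post is that the count is pooled across the whole computer rather than tracked per core.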
20150320, 05:48  #7  
May 2013
East. Always East.
11·157 Posts 
Quote:
The reliability is a measure of the "quality" of the results submitted by that CPU, i.e. their tendency to be error-free. If you return errors (or, presumably, a bad residue), your reliability goes down because we're not sure whether your results are good. You can think of it as a percentage: "We're 0.98 = 98% confident that your results are good." A reliability of 1.00 is not achievable (except when your confidence is 0; more on that later) because we can never be 100% sure of anything in this field.

The confidence is a measure of, well, the confidence in your reliability rating. It is capped at 10, but for all intents and purposes it could go higher. It increases by 1 every time you submit a result, regardless of whether that result has an error. A higher number means we're more confident in the reliability rating we give you.

So, for example, R=0.98, C=4 means you have a 98% reliability rating based on your last 4 submitted results. At C=10, your reliability of 0.98 is "stronger" because you have more good results to back it up. Last fiddled with by TheMawn on 20150320 at 05:49 
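A minimal sketch of how such a rating might evolve, purely for illustration: the function name, the 0.02 error penalty, and the update formula are all assumptions; PrimeNet's real formula is not given in this thread. Only the 0.98 ceiling and the capped-at-10 confidence counter come from the posts above.

```python
def update(reliability: float, confidence: int, result_ok: bool,
           cap: int = 10, max_rel: float = 0.98):
    """Assumed update step: confidence counts submitted results (capped
    at 10); reliability can never exceed 0.98 once results exist, and
    each erroneous result is penalized (0.02 here is a made-up value)."""
    confidence = min(confidence + 1, cap)
    if result_ok:
        reliability = min(reliability, max_rel)
    else:
        reliability = max(round(reliability - 0.02, 2), 0.0)
    return reliability, confidence

r, c = 1.0, 0                       # a fresh CPU: R=1, C=0
for ok in [True, True, False, True]:
    r, c = update(r, c, ok)
print(r, c)  # 0.96 4
```

Under this toy model a machine that never errs settles at R=0.98 with growing confidence, matching the "reliability can't exceed 0.98" remark earlier in the thread.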

20150320, 07:50  #8 
Aug 2002
Rovereto (Italy)
3×53 Posts 
Thanks to you all for clarifying. Apart from the error codes recorded by Prime95 (I love this name. It gives that sort of good feeling you get from something that is always immutable yet always getting better and better, like dawns or sunsets...), what was I saying? Ah, yes: is there some other source of info about the quality of a run, so that one may decide to stop it before the end and restart it (assuming the human behind it becomes aware of this)?
George, have you ever thought about a feature like this for your client? Hey Bud: your hardware is giving error after error! You'd better stop crunchin', do something, and then try again! Don't waste your time! 
20150320, 17:46  #9 
May 2013
East. Always East.
11×157 Posts 
The software does have some degree of error detection built in, but beyond that there isn't much that can be done.
If you gave me the interim residue after the 10,000,000th iteration of some test you're working on now, not a single one of us could tell you whether it looks good or bad. The residues look completely random from iteration to iteration. The error codes you mentioned ARE the way we estimate the quality of the run.

Part of the algorithm is squaring a very big number, which takes a lot of resources. To speed this up, the Fast Fourier Transform (FFT) is used, but the drawback is that it brings us into floating-point territory. This is a gross simplification, but it essentially says 11^{2} = 121.198..., which rounds to 121. Now instead of 11, you're using an X-million-digit number. Larger FFTs are slower but more precise, so we use the smallest we can. In my example you could use a bigger FFT to get 11^{2} = 121.034..., but that still gives 121, so we could have stayed with the smaller one. However, if we get 13^{2} = 169.498..., we're not sure whether that's 169 or 170. It would technically round to 169 (and be correct), but we're really not certain here. We choose an FFT size large enough that we're never more than 0.4 away from the nearest whole number.

If we ever are too far from the nearest whole number, the iteration is attempted again just to be sure, possibly with a larger FFT. This "round-off error" can often be reproduced, but if it can't, that right there is a guarantee that your CPU messed something up and is less trustworthy as a result. Any non-reproducible round-off error triggers an error code. There are other codes as well, but you get the idea.

The problem is your CPU might mess up so badly that it gets 14^{2} = 180.110..., which rounds to 180 nicely, but that's just plain wrong. We could try squaring again, but that would require us to assume that EVERY iteration is wrong until proven otherwise, which would be horribly time-consuming. Really, if we could identify a bad residue mid-run, it probably means we already know what the good residue is. 
If we knew that, we wouldn't need to crunch. 
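The round-off check described above can be demonstrated on a toy scale. This is not Prime95's code (which uses specialized weighted FFTs); it is a sketch that squares a small decimal number via a floating-point FFT convolution and reports how far the raw outputs landed from the nearest integers, which is exactly the quantity the 0.4 threshold is applied to.

```python
import numpy as np

def square_via_fft(digits, base=10):
    """Square a number given as a little-endian digit list using an FFT
    convolution. Returns the digit list of the square and the maximum
    round-off error seen before rounding (the quantity checked against
    a threshold like 0.4 in the explanation above)."""
    n = 2 * len(digits)                 # room for all product coefficients
    fa = np.fft.rfft(digits, n)
    conv = np.fft.irfft(fa * fa, n)     # digit-wise convolution of x with x
    maxerr = float(np.max(np.abs(conv - np.round(conv))))
    coeffs = np.round(conv).astype(np.int64)
    out, carry = [], 0                  # propagate carries back to base-10
    for coef in coeffs:
        carry += int(coef)
        out.append(carry % base)
        carry //= base
    while carry:
        out.append(carry % base)
        carry //= base
    while len(out) > 1 and out[-1] == 0:
        out.pop()
    return out, maxerr

sq, err = square_via_fft([1, 1])        # 11 in little-endian decimal
print(sq)          # [1, 2, 1]  -> 121
print(err < 0.4)   # True: round-off is small, so the result is trusted
```

At these tiny sizes the error is negligible; it is only with multi-million-digit operands that the error creeps toward the threshold and forces a larger FFT, as the post explains.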
20150320, 18:36  #10 
"Kieren"
Jul 2011
In My Own Galaxy!
2^{2}·2,539 Posts 
I thought that egregious errors would halt the worker. I seem to remember that happening during a spell when a failing PSU caused lots of errors in DC for me.

20150320, 19:39  #11 
Aug 2002
Rovereto (Italy)
3×53 Posts 
This is conclusive and incontrovertible. I was just thinkin' about a feature that automatically stops the test in case of a predefined sequence of errors, just to avoid useless results and a consequent waste of energy and time...
Last fiddled with by guido72 on 20150320 at 19:45 