#45
Romulan Interpreter
Jun 2011
Thailand
7×1,373 Posts
Spotless account would not be possible anymore, because I was involved in early CudaLucas testing, and there was a version with a bug which caused many of my residues to be wrong. This is how the bugs are found... In fact, about 90% of my bad residues are from that period (a very short span of time, within a month or so). The powers that be won't erase them, so I am still waiting for the time when I visit the States, to ambush George in a shadowy corner at night when he comes back from work, point a transistor at him, and ask him to remove the bad history from my account...

A few others are from the time when I had over 40 computers in the project and could not keep track of them all. As I moved away from IT and more into R&D in my company, those went MIA and were lost over the years.

More recently, my GIMPS activity has come from a few heavily loaded GPUs I have (or had) at home, on which, yes, I was running CudaLucas tests for the same exponent in parallel on two cards. Contrary to what some people may comment here, this actually saved quite some time for the project, including Madpoo's triple checks, and personally it saved ME a lot of time, gave me peace of mind, and some more credits too.

That is because the alternative would have been either to report enough bad results to create a headache for the other people, who would have to run DCs and TCs (and sometimes QCs); or to reduce my overclock (which still cannot guarantee 100% correct tests); or to stop testing altogether. Think about it: when you run one test after the other (the same test), if the first is wrong in the middle, or at the beginning, all the time is lost, both for the project and for yourself. Especially for yourself: you just lost time you could have spent on something else, or used to restart from a known good result, if only you could know which particular iteration went wrong. With the right script, and two GPUs crunching the same exponent, you can see immediately that something went wrong, and properly resume.

This way, you only waste a little time, mainly because one card is faster than the other: when you spot the mistake, the faster card has already advanced a couple of checkpoints, which you have to discard, because you don't know which residue is wrong, the one from the faster card or the one from the slower. So, generally, you do not spend double the time to do LL+DC; you only spend about 30% to 50% longer (thanks to resuming properly, not running wrong tests to the end, being able to overclock higher, etc.), and at the end you have an exponent which you are sure has the correct residue, and which is also (improperly) DC-ed. (I say "improperly" because, from the project's point of view, someone else still has to do the DC.)

For "old salts" like me, you should make an exception, in the sense that you should not jump immediately to TC the self-DC-ed exponents. I "guarantee" the residue in that case (well, with the reservation that someone else could report in my name, but that is easy to check for the DB admins, who have access to the computer's and assignment's characteristics), and my long activity, plus the fact that there are people here who know me in real life, should stand for me. Therefore, like I said in the past, you can schedule my self-DCs for some "low priority" TC in the future, to make it all legal; but jumping immediately to TC them, i.e. putting me on the same balance with the anonymous users or beginners who self-DC? C'mon! hehe.

This is a gain for the project, trust me. And a gain for me. The only negative side is that you gave me more credit. (From the project's point of view, you "paid" for DC work which still has to be done, and will need to be "paid" again, by someone else; but then, you don't take the GHz-days from the Bank of America... From my point of view, I did both the LL and the DC, with my hardware, my time, and my electricity bill, so I think it is fair to get both credits.)
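The dual-card cross-check described above can be sketched roughly as follows. This is a minimal illustration, not LaurV's actual script: the checkpoint representation as (iteration, residue) pairs and the function name are my own assumptions, and real CudaLucas save files would need their own parser.

```python
# Sketch of the dual-GPU cross-check: two cards run the same exponent,
# and a script compares their interim residues at matching checkpoints,
# so a bad iteration is caught early instead of at the end of the test.
# Checkpoint format (iteration, residue) is hypothetical.

def first_mismatch(run_a, run_b):
    """Return the earliest iteration where the two runs disagree,
    or None if every common checkpoint matches."""
    a_by_iter = dict(run_a)
    b_by_iter = dict(run_b)
    for it in sorted(set(a_by_iter) & set(b_by_iter)):
        if a_by_iter[it] != b_by_iter[it]:
            return it
    return None

# Example: the faster card (A) is a couple of checkpoints ahead;
# iteration 30000 disagrees, so both runs resume from 20000.
card_a = [(10000, "a1b2"), (20000, "c3d4"), (30000, "e5f6"), (40000, "0789")]
card_b = [(10000, "a1b2"), (20000, "c3d4"), (30000, "ffff")]
print(first_mismatch(card_a, card_b))  # -> 30000
```

Because only the checkpoints common to both runs are compared, the faster card being ahead does not matter: the mismatch is reported at the earliest shared iteration where the residues differ.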
There was a time when I had so much GPU power that the CPU was bottlenecking all my systems... Generally this was when mining bitcoins with GPUs was no longer worth it, because ASICs were taking over the market. At that time I had GPU power and nothing to do with it... Not anymore. Please note that I didn't "DC myself" very frequently, and especially I didn't do it recently, now that my GPU power has slowly been equalized by my CPU power (due to some GPUs being destroyed or taken out of use, and to buying some new i7-6950X systems).

Going back to the production line now, the break is over. We need to test 400 boards of some motor controller. Grrr...

Last fiddled with by LaurV on 2017-05-26 at 09:10
#46
Romulan Interpreter
Jun 2011
Thailand
258B₁₆ Posts
Yes, they are; therefore, permanently masking the residues would avoid all this discussion, and the "free TC". What is the use of seeing the full residue? I know I have a match if I did an honest DC and saw the first 14 hex digits matching. Some people here (not me) do honest TCs.
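The 14-digit comparison described here is trivial to script. A minimal sketch, with made-up residue values, assuming the server masks the last two hex digits of the 16-digit res64:

```python
# An honest double-checker compares the visible digits of their own
# residue against the published, partially masked one. Residue values
# below are invented for illustration.

def matches_published(my_res64: str, published_masked: str,
                      masked_digits: int = 2) -> bool:
    """True if the visible (unmasked) hex digits agree."""
    visible = 16 - masked_digits
    return my_res64.lower()[:visible] == published_masked.lower()[:visible]

published = "3ac9f1e2d4b5a6__"   # server shows 14 of the 16 digits
print(matches_published("3AC9F1E2D4B5A6C7", published))  # -> True
print(matches_published("3AC9F1E2D4B5FFFF", published))  # -> False
```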
#47
"Kieren"
Jul 2011
In My Own Galaxy!
2×3×1,693 Posts
#48
Serpentine Vermin Jar
Jul 2014
7×11×43 Posts
Once a factor is found, there's really no reason for an LL test, but if someone is plugging away at it and turns in their LL result, well heck, give 'em credit even though the result is useless. I mean, it would suck to have someone turn in a factor for some 100M digit exponent just one day before you finish your run (one that may have been plugging away for months or even a year or more). At least you'll get some credit out of it.

On the other hand, if you try turning in an LL result for an exponent that's been factored, and you do NOT have an assignment, it won't accept it. So... that's one f'rinstance of the system rejecting unnecessary (and unassigned) results.
#49
Serpentine Vermin Jar
Jul 2014
CEF₁₆ Posts
Testing against a "known good" residue is better than randomly double-checking something that hasn't been verified yet and may or may not have been done right the first time. I'd hate to be writing something new, get a mismatch, and then spend time chasing a phantom bug. LOL

For unverified results, though, maybe masking more than the last byte isn't a terrible idea, or masking the residue entirely until a match has been made. I'm trying to think of a good case for having any portion of it visible, and I can't really think of any except "because that's how it's currently done".
#50
Sep 2003
5×11×47 Posts
The output of the GIMPS project, in the end, is a list of factors and residues. This output should be published in a timely manner. It can take a decade or more for double checks to be completed, and that's too long to wait to publish the first-time LL result. Masking the results entirely could even result in the loss of many years of work in the event of a Seventeen or Bust-style data disaster. Although you have good backup practices, we can't really take anything for granted in the crazy world we live in now.

The current system seems to strike the right balance: publish 16−N hex digits of the unverified residue, so that in a real pinch, where the final N digits were lost forever, we could still accept a match with this truncated residue as a valid double check (maybe limited to a small corps of trusted double checkers in that situation), but withhold N digits so that in normal scenarios we continue to accept double checks from untrusted users and computers.

Arguably we could increase N from 2 to 4, on the grounds that that's what it probably would have been in the original design if not for the need to verify against old historical 4-hex-digit residues returned by pre-GIMPS testers. A one-in-256 chance of a random correct guess is maybe a bit too high.
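The publish-16−N scheme and the guess probabilities above can be illustrated with a small sketch. The function names are mine, not PrimeNet's, and the residue value is invented:

```python
# The server publishes 16-N hex digits of the 64-bit residue and
# withholds the last N, so an untrusted double-checker must reproduce
# the hidden part independently.

def mask_residue(res64: str, n: int) -> str:
    """Replace the last n hex digits of a 16-digit residue with '_'."""
    res64 = res64.lower().zfill(16)
    return res64[: 16 - n] + "_" * n

def guess_probability(n: int) -> float:
    """Chance of guessing the n withheld hex digits at random."""
    return 1.0 / (16 ** n)

print(mask_residue("3AC9F1E2D4B5A6C7", 2))  # -> 3ac9f1e2d4b5a6__
print(guess_probability(2))                 # -> 0.00390625 (1 in 256)
print(guess_probability(4))                 # -> 1.52587890625e-05 (1 in 65536)
```

Raising N from 2 to 4, as suggested, drops the odds of a lucky guess from 1 in 256 to 1 in 65,536.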
#51
Romulan Interpreter
Jun 2011
Thailand
9611₁₀ Posts
1. Masking more than two hex digits won't improve the actual situation in any way.
2. When testing new code, whether you match 14 visible hex digits or mask all 16, what the heck is the difference? What is the chance, in your code (if it is honest), that 14 digits match and two do not? C'mon. I maintain my opinion that the full residue (I mean the 16 hex digits, not all p bits) should never be visible; some of it should always be masked. A much better way would be for P95 and all the other LL programs to submit something like 20 hex digits, of which only the last 16 are shown (always), while the first 4 work as a "key": the result is ignored if those don't match. Or other numbers instead of 20/16. Either this, or never showing the full 16 digits, would highly discourage fake reports...
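A rough sketch of the 20/16 "key" idea. Everything here is purely hypothetical: no such field exists in the real PrimeNet protocol, and the residue values are made up.

```python
# Clients report 20 hex digits; the server displays only the last 16,
# and the hidden first 4 act as a key. A faker who copies the visible
# digits must guess the key, a 1-in-16**4 (65536) chance.

DB = {}  # exponent -> full 20-digit report (4-digit key + 16-digit res64)

def submit(exponent: int, res20: str) -> bool:
    """Accept a report only if all 20 digits agree with any earlier one."""
    if exponent in DB and DB[exponent] != res20:
        return False  # key or residue mismatch: silently ignore the report
    DB[exponent] = res20
    return True

def display(exponent: int) -> str:
    """What the public results page would show: the last 16 digits only."""
    return DB[exponent][4:]

# An honest client reports key + res64 straight from its own computation:
submit(77232917, "9f3a" + "3ac9f1e2d4b5a6c7")
print(display(77232917))                              # -> 3ac9f1e2d4b5a6c7

# A faker copies the visible digits but has to invent the hidden key:
print(submit(77232917, "0000" + display(77232917)))   # -> False
```

Note this sketch only guards reports against an earlier genuine one; a full design would also need the first-time report to be trustworthy, which is exactly the problem the thread is debating.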
#52
Serpentine Vermin Jar
Jul 2014
7×11×43 Posts
#53
Basketry That Evening!
"Bunslow the Bold"
Jun 2011
40<A<43 -89<O<-88
1C35₁₆ Posts
I would concur with increasing the mask count from 1 byte to either 2 or 3 bytes (either 4 or 6 hexits).
#54
Random Account
Aug 2009
1953₁₀ Posts
I've never liked the manual submit, or manual assignment either. I've run mfaktc a lot and CUDALucas just a little. For one thing, the results are not credited to the CPU on which they were run. I've asked several times about the possibility of modifying these applications to do their own communication with the PrimeNet server, and I received some rather unpleasant responses to the idea.

I've been a project member since 2009, I believe, and I've wondered about the integrity of the project from time to time. "Is it all really valid?" I would think to myself. Could something have been missed, or is there something in the results which should not be there? I look at the mersenne.org page multiple times a day to see the current results. Someone above mentioned triple checks. Where does one start with doing those? It could take years to recheck every single residue!
#55
Undefined
"The unspeakable one"
Jun 2006
My evil lair
2²×1,549 Posts
But still a lot less time than the original tests took. And even if everything were redone to TC (or QC, or whatever) level, there would still be no guarantee of anything. There will always be those who don't trust it, no matter how much validation is done.

On a practical note, it would be no big deal for a single user to recheck the first set of exponents, up to one million, if they so desired. And I'd imagine that in the not-too-distant future an individual could recheck exponents up to perhaps ten million or so. Basically, what today seems almost unreachable will eventually become easy. So such rechecks will be done, just not today.
Similar Threads

| Thread | Thread Starter | Forum | Replies | Last Post |
|--------|----------------|-------|---------|-----------|
| Automatic submit results + fetch assignments for mfaktc? | DuskFalls | GPU Computing | 5 | 2017-12-02 00:34 |
| GPU id/name for manual results | preda | GPU Computing | 15 | 2017-08-16 17:34 |
| MLucas, submit results? | Sleeping_menace | Mlucas | 17 | 2015-06-13 03:12 |
| manual results | ramgeis | PrimeNet | 8 | 2013-05-30 06:33 |
| Only submit part of ECM results? | dabaichi | PrimeNet | 5 | 2011-12-07 19:27 |