#188
Mar 2004
Belgium
7·11² Posts
I just got "Daily quota (2) exceeded"? Is this normal?
#189
Nov 2006
2³×11 Posts
No; it means that you are returning invalid results. The standard daily quota is 1500; every invalid result halves the quota, and every successful result doubles it (capped at 1500). This is done to stop bad computers from trashing lots of work.
Might it have just been a glitch with your PC?
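The quota scheme described above is easy to picture in code. Below is a minimal sketch, assuming the stated 1500 cap and the halve-on-invalid, double-on-valid rules; it is an illustration of the mechanism, not the actual BOINC server implementation, and the floor of 1 is an assumption:

```python
# Minimal sketch of the daily-quota mechanism described above.
# Assumptions (not from the actual BOINC server code): quota is capped
# at 1500, halves on each invalid result, doubles on each valid one,
# and never drops below 1.

DAILY_QUOTA_CAP = 1500

def update_quota(quota: int, result_valid: bool) -> int:
    """Return the host's new daily quota after one reported result."""
    if result_valid:
        return min(quota * 2, DAILY_QUOTA_CAP)
    return max(quota // 2, 1)

# A few bad results in a row collapse the quota quickly, which is how a
# host ends up seeing a "Daily quota exceeded" message:
quota = DAILY_QUOTA_CAP
for _ in range(4):
    quota = update_quota(quota, result_valid=False)
print(quota)  # 93
```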
#190
Mar 2004
Belgium
7·11² Posts
Quote:
#191
"Sander"
Oct 2002
52.345322,5.52471
29·41 Posts
Quote:

Why are you doing this? Who asked you to do this? Several people have told you it's wasting resources. I'm off doing some other factorizations for the time being.
#192
"Michael Kwok"
Mar 2006
2235₈ Posts
There are always manual reservations available for those who don't want double-checking.
#193
Dec 2006
Anchorage, Alaska
2·3·13 Posts
SMH:

Wow. Okay. So let's assume that one of the unchecked results from only 5M ago was a twin prime, but wasn't noted as such due to a bad result. If we don't double-check, that would mean doing twice the amount of work to find the next one. Now, all we can hope for at this moment is that we did not miss a twin prime, and make sure we don't miss one going forward. The work required to make up for a missed twin prime is more than what I think we want...

You seem to be reacting in a fairly hostile way. The feeling I get when I read your post is that you are very pissed off, not thinking rationally about the situation, and quitting because you don't like it.

Personally, I don't see a problem with error checking. If it has been shown factually that there is a noticeable number of erroneous results, one must take corrective action. Tell me why this should not be so.

Last fiddled with by Skligmund on 2007-01-06 at 03:28
#194
P90 years forever!
Aug 2002
Yeehaw, FL
17×487 Posts
Quote:

Look at it this way: your goal is to find a world-record twin prime in as short a time as possible. I hand you a sieved range of 10M and tell you that someone searched the lower 5M and found 90% of the primes (a 10% error rate), and there was no twin. Now, given your goal, would you rescan the lower 5M, or would you scan the higher 5M, where you *know* you will find 10 times as many new primes and have 10 times as many chances of finding that twin prime?

I think SMH's frustration is that 50% of the project's CPU power is searching that lower 5M, where the chances of success are 10 times smaller than in virgin territory.
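George's factor of 10 checks out with quick arithmetic. A back-of-the-envelope sketch, where the prime count per 5M block is a made-up placeholder and the 10% error rate is taken from his example:

```python
# Back-of-the-envelope check of the argument above. The count of primes
# per 5M block is a made-up placeholder; only the ratio matters.

primes_per_5M = 1000   # hypothetical number of primes in a 5M block
error_rate = 0.10      # fraction of lower-5M primes assumed missed

rescan_yield = primes_per_5M * error_rate   # rescanning finds only the misses
virgin_yield = primes_per_5M                # virgin territory: all primes are new

print(virgin_yield / rescan_yield)  # 10.0 -- ten times the chances per unit work
```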
#195
Dec 2006
Anchorage, Alaska
2·3·13 Posts
Well, I guess it is just my nature as an aircraft mechanic to be sure about everything that is done. I was never good at leaving things to chance, and always wanted to KNOW what everything was, is, and will be.

I just noticed I have 300-400 error results from one computer in two days. Fortunately for me, I think almost all of them were caught. I have since repaired the problem, and don't expect any more errors from any of my computers. If the double-checking had not been in effect, they would have passed through as valid. That is a whole 1M miscalculated. That's one heck of a gap to be missing, IMHO. Just some opinions from me; I'll keep crunching one way or the other. :D
#196
Nov 2006
Earth
2⁶ Posts
Quote:

There's a "little" adjustment to that statement. Using your example, 50% of the project's CPU power is NOT searching that lower 5M (rechecking what's already been done); 50% of the project's CPU power IS double-checking the virgin territory. And it is my understanding that it's only being done to determine where the discrepancies are and what the failure rate is.

Nowhere has Rytis stated that old WUs were being re-issued, and I have not noticed any "old" WUs on my machines. He actually states in a previous post, "We are not looking back at old ones." He's just attempting to understand the current failure situation. I feel confident that as soon as he gets a reasonable idea of what a "normal" failure rate is, or finds who/what is causing the discrepancies, he'll return to 90%/10%. Maybe through this process he might even feel confident enough to raise it to 95%/5% or higher.

Can anyone shed some light on what a "normal" failure/discrepancy rate is? Or maybe someone can come along with logic strong enough to guarantee a twin before 25G with 100% first-pass coverage. I'm sure there's an explanation for these discrepancies... patience will discover it.

p.s. The average TP density is 1 twin every 13.6G. If a twin was skipped below 3G, then statistically another 10.6G to 24.2G would need to be searched to possibly find the next twin. Of course, if we knew the twin was below 3G, then the obvious answer would be to double-check 0G-3G; heck, even a triple check still comes out better. But we don't know... so let's hope the error rate is extremely low or can be pinpointed to a specific cause. If the twin is found tomorrow, then we've wasted a lot of CPU cycles talking about this.
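The p.s. figures follow from a simple model. Here is a sketch, using the post's 13.6G average twin density and a memoryless, renewal-style assumption of my own; the numbers reproduce the 10.6G and 24.2G quoted above:

```python
# Sketch of the p.s. arithmetic. The 13.6G average twin density comes
# from the post; modelling the next twin as one average gap away is my
# own simplification.

avg_gap_G = 13.6   # average spacing between twins, in G (billions)
searched_G = 3.0   # range already searched on the first pass

# If no twin was missed, the next one is expected within roughly one
# average gap of the start, i.e. about 10.6G beyond the searched 3G:
best_case_extra = avg_gap_G - searched_G

# If a twin below 3G WAS missed, the next twin sits roughly one more
# average gap beyond that expected point:
worst_case_extra = best_case_extra + avg_gap_G

print(round(best_case_extra, 1), round(worst_case_extra, 1))  # 10.6 24.2
```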
#197
"Michael Kwok"
Mar 2006
1,181 Posts
#198
5·1,931 Posts
Hello SMH, George, Rytis and everyone else here in the Twin Prime Search forum. jmblazek brought this thread to my attention and I'd like to give my opinion on the deal.

I've read what George and SMH have said about wasting CPU cycles on a double check. At first I didn't agree, because my brain is still in Riesel Sieve land, where a missed prime can create massive headaches and tons of unnecessary work. However, from what I've read here and chatted about with Rytis... this project is a little different.

I've been mulling over the idea of new users/hosts having to complete an audit batch of workunits for double-checking before they are allowed the privilege of crunching first-run workunits. Say you had to complete 10 workunits that must ALL match before you can crunch your first workunit on the main effort (see the sketch after this post). This would be a fair compromise. It would show that your computer passes certain tests before it wastes effort trying to crunch with software that is VERY picky about CPUs, memory, power supplies, heat issues and so forth.

Many people believe that if their computer never crashes, it is good for anything... and most of us know that isn't the case when it comes to prime finding. If your computer is sending out bad residuals, then you most likely have a hardware problem that needs attention. I can't tell you how many times a simple change of PSU has made most of our bad-residual problems go away. Or a simple change of memory timings in the motherboard BIOS.

So... I believe you should think about an audit period for each new user and new host. Then dedicate a few machines to auditing random users/hosts full time, and let the bulk of your users steam full ahead on the effort of finding that mega twin prime. I think this is a compromise that everyone can at least call a starting point to an agreement.

Lee Stephens
Head Cheese at Riesel Sieve
www.rieselsieve.com
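A minimal sketch of the audit gate Lee proposes, with a hypothetical server-side routing function; every name below is invented for illustration, and nothing here is actual BOINC or Riesel Sieve code:

```python
# Minimal sketch of the proposed audit gate. All names are hypothetical;
# this is not actual BOINC or Twin Prime Search server code.

AUDIT_BATCH = 10  # double-check workunits a new host must complete

def passed_audit(host_results: list[bool]) -> bool:
    """A host earns first-run work only after AUDIT_BATCH consecutive
    double-check results that ALL matched the canonical result."""
    recent = host_results[-AUDIT_BATCH:]
    return len(recent) == AUDIT_BATCH and all(recent)

def next_workunit_pool(host_results: list[bool]) -> str:
    """Route unproven hosts to double-check work, proven ones to first-run."""
    return "first-run" if passed_audit(host_results) else "double-check"

# Example: a host with one mismatch among its last ten results stays on
# double-check duty until it produces ten clean matches in a row.
history = [True] * 9 + [False]
print(next_workunit_pool(history))       # double-check
print(next_workunit_pool([True] * 10))   # first-run
```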